API Reference
Complete API documentation for Coopetition-Gym v0.3.0.
Generated: 2026-01-13
Quick Navigation
| Module | Description |
|---|---|
| Factory Functions | Environment creation |
| Core: Value Functions | TR-1 value creation |
| Core: Interdependence | TR-1 structural coupling |
| Core: Trust Dynamics | TR-2 trust evolution |
| Core: Equilibrium | Payoff computation |
| Environments | Environment classes |
| Wrappers | PettingZoo adapters |
| Configuration | Dataclass configs |
Package Overview
```python
import coopetition_gym

# Version and metadata
coopetition_gym.__version__  # '0.3.0'
coopetition_gym.__author__   # 'Vik Pant, Eric Yu'

# List available environments
coopetition_gym.list_environments()
# ['TrustDilemma-v0', 'PartnerHoldUp-v0', ...]
```
Action Space Semantics
All environments in Coopetition-Gym v1.x use the uniaxial treatment of coopetition:
- Action space: `Box(low=0, high=endowment, shape=(n_agents,))`, representing cooperation/investment levels
- Interpretation: Actions specify how much each agent contributes to joint value creation
- Competition: Modeled through structural parameters (interdependence, bargaining shares) rather than explicit competitive actions
This design reflects one established paradigm in coopetition research. Version 2.x will introduce biaxial action spaces with independent cooperation and competition dimensions. See Theoretical Foundations for rationale.
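The uniaxial convention can be illustrated with plain NumPy. This is a minimal sketch: the endowment value of `100.0` and the raw policy outputs are assumed example values, not package defaults.

```python
import numpy as np

# Assumed example values; endowments are environment-specific.
endowment = 100.0
n_agents = 2

# Each agent picks one cooperation/investment level in [0, endowment].
# A raw policy output may fall outside the Box bounds, so clip it.
raw_action = np.array([120.0, -5.0])
action = np.clip(raw_action, 0.0, endowment)
# action now lies within Box(low=0, high=endowment, shape=(n_agents,))
```

Because competition is carried by structural parameters rather than actions, a valid action is always just a vector of non-negative contribution levels.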
Factory Functions
make
```python
coopetition_gym.make(
    env_id: str,
    **kwargs
) -> gymnasium.Env
```
Create a Gymnasium-compatible coopetition environment.
Parameters:
| Name | Type | Description |
|---|---|---|
| `env_id` | `str` | Environment identifier (see `list_environments()`) |
| `**kwargs` | | Environment-specific configuration parameters |
Returns:
| Type | Description |
|---|---|
| `gymnasium.Env` | Gymnasium-compatible environment instance |
Raises:
| Exception | Condition |
|---|---|
| `ValueError` | Unknown environment ID |
| `TypeError` | Invalid configuration parameter |
Example:
```python
import coopetition_gym
import numpy as np

# Basic usage
env = coopetition_gym.make("TrustDilemma-v0")
obs, info = env.reset(seed=42)

# With custom parameters
env = coopetition_gym.make(
    "PlatformEcosystem-v0",
    n_developers=8,
    max_steps=200
)

# Step through environment
actions = np.array([50.0, 50.0])
obs, rewards, terminated, truncated, info = env.step(actions)
```
See Also:
- `make_parallel()` - PettingZoo Parallel API
- `make_aec()` - PettingZoo AEC API
- Environment Reference - Full environment documentation
make_parallel
```python
coopetition_gym.make_parallel(
    env_id: str,
    obs_config: Optional[ObservationConfig] = None,
    render_mode: Optional[str] = None,
    **kwargs
) -> CoopetitionParallelEnv
```
Create a PettingZoo Parallel API environment for simultaneous agent moves.
Parameters:
| Name | Type | Default | Description |
|---|---|---|---|
| `env_id` | `str` | required | Environment identifier |
| `obs_config` | `ObservationConfig` | `None` | Observation configuration (see `ObservationConfig`) |
| `render_mode` | `str` | `None` | Rendering mode (`None`, `"ansi"`, `"rgb_array"`) |
| `**kwargs` | | | Environment-specific parameters |
Returns:
| Type | Description |
|---|---|
| `CoopetitionParallelEnv` | PettingZoo-compatible parallel environment |
Example:
```python
import coopetition_gym

# Basic parallel environment
env = coopetition_gym.make_parallel("TrustDilemma-v0")
observations, infos = env.reset(seed=42)

# Actions are dictionaries keyed by agent name
actions = {agent: 50.0 for agent in env.agents}
observations, rewards, terminations, truncations, infos = env.step(actions)

# With realistic observation asymmetry (agents can't see others' trust toward them)
from coopetition_gym import ObservationConfig

env = coopetition_gym.make_parallel(
    "TrustDilemma-v0",
    obs_config=ObservationConfig.realistic_asymmetry()
)
```
Notes:
- All agents act simultaneously each step
- Observations and actions are dictionaries keyed by agent ID
- Agent IDs follow the pattern `"agent_0"`, `"agent_1"`, etc.
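The dict-keyed step contract above can be sketched with a minimal stand-in environment. `StubParallelEnv` is a hypothetical stub, not the package implementation: its rewards are placeholders (each agent receives its own action back), and it exists only to exercise the Parallel API shapes.

```python
import numpy as np

# Hypothetical stand-in for the Parallel API contract: observations, rewards,
# terminations, truncations, and infos are all dicts keyed by agent ID.
class StubParallelEnv:
    def __init__(self, n_agents=2, max_steps=3):
        self.agents = [f"agent_{i}" for i in range(n_agents)]
        self.max_steps = max_steps

    def reset(self, seed=None):
        self._t = 0
        obs = {a: np.zeros(1) for a in self.agents}
        infos = {a: {} for a in self.agents}
        return obs, infos

    def step(self, actions):
        self._t += 1
        done = self._t >= self.max_steps
        obs = {a: np.array([actions[a]]) for a in self.agents}
        rewards = {a: float(actions[a]) for a in self.agents}  # placeholder payoff
        terminations = {a: done for a in self.agents}
        truncations = {a: False for a in self.agents}
        infos = {a: {} for a in self.agents}
        return obs, rewards, terminations, truncations, infos

# Standard parallel rollout loop: all agents act simultaneously each step.
env = StubParallelEnv()
observations, infos = env.reset(seed=42)
while env.agents:
    actions = {agent: 50.0 for agent in env.agents}
    observations, rewards, terminations, truncations, infos = env.step(actions)
    if all(terminations[a] or truncations[a] for a in env.agents):
        break
```

The real environments compute rewards from the TR-1/TR-2 payoff machinery; only the dictionary shapes carry over.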
See Also:
- `make_aec()` - Sequential moves
- ObservationConfig - Observation configuration
make_aec
```python
coopetition_gym.make_aec(
    env_id: str,
    obs_config: Optional[ObservationConfig] = None,
    render_mode: Optional[str] = None,
    **kwargs
) -> CoopetitionAECEnv
```
Create a PettingZoo AEC (Agent Environment Cycle) environment for sequential moves.
Parameters:
| Name | Type | Default | Description |
|---|---|---|---|
| `env_id` | `str` | required | Environment identifier |
| `obs_config` | `ObservationConfig` | `None` | Observation configuration |
| `render_mode` | `str` | `None` | Rendering mode |
| `**kwargs` | | | Environment-specific parameters |
Returns:
| Type | Description |
|---|---|
| `CoopetitionAECEnv` | PettingZoo AEC environment |
Example:
```python
import coopetition_gym

env = coopetition_gym.make_aec("TrustDilemma-v0")
env.reset(seed=42)

# Iterate through agents sequentially
for agent in env.agent_iter():
    observation, reward, termination, truncation, info = env.last()
    if termination or truncation:
        action = None
    else:
        action = 50.0  # Your policy here
    env.step(action)
```
Notes:
- Agents take turns acting in sequence
- Use `agent_iter()` for the standard iteration pattern
- Use `last()` to get the current agent's observation
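The `agent_iter()`/`last()` control flow can likewise be sketched with a hypothetical stand-in (not the package implementation) that simply cycles two agents for a fixed number of turns:

```python
# Hypothetical stub of the AEC protocol: agent_iter() yields the acting agent,
# last() returns that agent's (observation, reward, termination, truncation, info),
# and terminated agents must step with action None.
class StubAECEnv:
    def __init__(self, n_agents=2, max_turns=4):
        self.agents = [f"agent_{i}" for i in range(n_agents)]
        self.max_turns = max_turns

    def reset(self, seed=None):
        self._turn = 0
        self._history = []

    def agent_iter(self):
        while self._turn < self.max_turns:
            yield self.agents[self._turn % len(self.agents)]

    def last(self):
        # Each agent's final turn is flagged as terminated.
        done = self._turn >= self.max_turns - len(self.agents)
        return 0.0, 0.0, done, False, {}

    def step(self, action):
        self._history.append(action)
        self._turn += 1

env = StubAECEnv()
env.reset(seed=42)
for agent in env.agent_iter():
    observation, reward, termination, truncation, info = env.last()
    action = None if termination or truncation else 50.0
    env.step(action)
# Each agent acts once, then steps with None on its terminal turn.
```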
list_environments
```python
coopetition_gym.list_environments() -> List[str]
```
Return list of all available environment identifiers.
Returns:
| Type | Description |
|---|---|
| `List[str]` | Sorted list of environment IDs |
Example:
```python
import coopetition_gym

envs = coopetition_gym.list_environments()
print(envs)
# ['ApacheProject-v0', 'CoalitionFormation-v0', 'CooperativeNegotiation-v0',
#  'DynamicPartnerSelection-v0', 'LoyaltyTeam-v0', 'PartnerHoldUp-v0',
#  'PlatformEcosystem-v0', 'PublicGoods-v0', 'RecoveryRace-v0',
#  'RenaultNissan-v0', 'ReputationMarket-v0', 'SLCD-v0',
#  'SynergySearch-v0', 'TeamProduction-v0', 'TrustDilemma-v0']
```
version
```python
coopetition_gym.version() -> str
```
Return the package version string.
Returns:
| Type | Description |
|---|---|
| `str` | Version in semver format (e.g., `"0.3.0"`) |
info
```python
coopetition_gym.info() -> None
```
Print package information including version, authors, and available environments.
Example:
```python
import coopetition_gym

coopetition_gym.info()
# Coopetition-Gym v0.3.0
# Authors:
#   Vik Pant - Faculty of Information, University of Toronto
#   Eric Yu - Faculty of Information and Department of Computer Science, University of Toronto
# ...
```
Type Aliases
Common type aliases used throughout the API:
```python
from typing import Union

import numpy as np
from numpy.typing import NDArray

# Array types
FloatArray = NDArray[np.floating]  # General floating-point array
IntArray = NDArray[np.integer]     # Integer array

# Common function signatures
ActionType = Union[float, NDArray[np.floating]]
ObservationType = NDArray[np.floating]
RewardType = NDArray[np.floating]
```
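For illustration, here is a hypothetical policy function annotated with these aliases. `constant_policy` is not part of the package, and the aliases are redefined locally so the snippet is self-contained:

```python
from typing import Union

import numpy as np
from numpy.typing import NDArray

FloatArray = NDArray[np.floating]
ActionType = Union[float, NDArray[np.floating]]
ObservationType = NDArray[np.floating]

# Hypothetical policy: maps an observation vector to a constant
# cooperation level for each agent.
def constant_policy(obs: ObservationType, level: float = 50.0) -> ActionType:
    n_agents = obs.shape[0]
    return np.full(n_agents, level)

action = constant_policy(np.zeros(2))
# action is a FloatArray of shape (2,) filled with 50.0
```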
Module Index
Core Mathematical Modules
| Module | Description | Technical Report |
|---|---|---|
| `core.value_functions` | Individual and synergistic value computation | TR-1 §5-6 |
| `core.interdependence` | Structural dependency matrices | TR-1 §3-4 |
| `core.trust_dynamics` | Trust and reputation evolution | TR-2 §4-6 |
| `core.equilibrium` | Payoff computation and equilibrium solving | TR-1 §7 |
| `core.collective_action` | Collective action and loyalty mechanics | TR-3 |
| `core.reciprocity` | Reciprocity dynamics (skeleton) | TR-4 |
Environment Modules
| Module | Description |
|---|---|
| `envs.base` | Abstract environment classes |
| `envs.dyadic_envs` | 2-agent environments |
| `envs.ecosystem_envs` | N-agent environments |
| `envs.benchmark_envs` | Research benchmarks |
| `envs.case_study_envs` | Validated case studies |
| `envs.extended_envs` | Extended mechanics |
| `envs.collective_action_envs` | TR-3 collective action environments |
Wrapper Modules
| Module | Description |
|---|---|
| `envs.wrappers.observation_config` | Observation configuration |
| `envs.wrappers.parallel_wrapper` | PettingZoo Parallel adapter |
| `envs.wrappers.aec_wrapper` | PettingZoo AEC adapter |
Changelog
v0.3.0 (Current)
- Added 5 TR-4 reciprocity environments
- Added 5 TR-3 collective action environments
- 20 environments now available
- Implemented reciprocity mechanics from TR-4
- Implemented loyalty mechanics from TR-3
v0.2.0
- Added `ObservationConfig` for configurable information asymmetry
- Added `make_parallel()` and `make_aec()` factory functions
- Added PettingZoo wrapper classes
- Enhanced type annotations throughout
v0.1.0
- Initial release
- 5 TR-1 environments + 5 TR-2 environments implemented
- Core mathematical framework complete
See Also
- Getting Started - Tutorial introduction
- Environment Reference - Detailed environment docs
- Theoretical Foundations - Mathematical background
Technical Reports
- TR-1: Computational Foundations for Strategic Coopetition: Formalizing Interdependence and Complementarity (arXiv:2510.18802)
- TR-2: Computational Foundations for Strategic Coopetition: Formalizing Trust and Reputation Dynamics (arXiv:2510.24909)
- TR-3: Computational Foundations for Strategic Coopetition: Formalizing Collective Action and Loyalty (arXiv:2601.16237)
- TR-4: Computational Foundations for Strategic Coopetition: Formalizing Sequential Interaction and Reciprocity (arXiv:2604.01240)