Home

Hou Shengren edited this page Mar 26, 2026 · 9 revisions
RL-ADN is a Python library for reinforcement learning research on energy storage dispatch in active distribution networks. It combines environment simulation, packaged feeder data, baseline optimization components, and the Laurent power-flow solver in one research-friendly codebase.
- Versatile Benchmarking: Model diverse energy arbitrage tasks with full flexibility.
- Laurent Power Flow: over ten times faster than traditional power-flow methods.
- Seamless Transition: Designed for both simulated environments and real-world applications.
- Open-source: Easily accessible for modifications, customizations, and further research.
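The speed advantage of solvers in the Laurent family comes from replacing Jacobian-based Newton iterations with a cheap fixed-point update. The two-bus sketch below illustrates that iteration style only; it is not RL-ADN's actual solver, and the per-unit impedance and load values are hypothetical.

```python
# Illustrative only: a two-bus fixed-point power flow, not RL-ADN's
# Laurent solver. Solvers in this family iterate a linear update
# instead of rebuilding a Jacobian each step, which is where the
# speedup over Newton-Raphson comes from.

def solve_two_bus(z_line, s_load, v_slack=1.0 + 0j, tol=1e-10, max_iter=100):
    """Fixed-point iteration: V <- V_slack - Z * conj(S / V)."""
    v = v_slack  # flat start
    for _ in range(max_iter):
        v_new = v_slack - z_line * (s_load / v).conjugate()
        if abs(v_new - v) < tol:
            return v_new
        v = v_new
    raise RuntimeError("power flow did not converge")

# Hypothetical per-unit values for a short feeder section.
v = solve_two_bus(z_line=0.01 + 0.02j, s_load=0.10 + 0.05j)
print(abs(v))  # load-bus voltage magnitude, slightly below 1.0 p.u.
```

Each iteration is a handful of complex multiplications, so the full-network version reduces to sparse matrix-vector products rather than repeated factorizations.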
Phase A Topology Scenarios:
The 34-bus and 69-bus feeders now support hand-authored radial topology scenarios that switch on reset(). RL-ADN now supports topology variation as an environment scenario, aligned with the TP1-TP7 evaluation style used in the topology-aware GNN transferability paper. In this phase:
- topology is not part of the RL action space
- topology changes only on reset()
- 34-bus and 69-bus feeder scenario libraries are provided
- graph and topology metadata can be exported for later GNN work
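A radial topology scenario is a spanning tree of the feeder graph: n buses connected by exactly n - 1 branches with no loops. The sketch below checks that property with a union-find pass; the function name and the 5-bus edge lists are illustrative, not the TP1-TP7 definitions shipped with RL-ADN.

```python
# Illustrative sketch: a radial scenario must be a spanning tree of
# the feeder graph. The edge lists below are hypothetical examples,
# not RL-ADN's actual scenario library.

def is_radial(n_buses, edges):
    """True if `edges` forms a spanning tree over buses 0..n_buses-1."""
    if len(edges) != n_buses - 1:
        return False
    parent = list(range(n_buses))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path compression
            i = parent[i]
        return i

    for a, b in edges:  # a cycle would join two buses with equal roots
        ra, rb = find(a), find(b)
        if ra == rb:
            return False
        parent[ra] = rb
    return True

# A 5-bus example: one radial scenario, one variant with a loop.
print(is_radial(5, [(0, 1), (1, 2), (1, 3), (3, 4)]))  # True
print(is_radial(5, [(0, 1), (1, 2), (2, 0), (3, 4)]))  # False: 0-1-2 loop
```

Because edge count and connectivity are both enforced, passing the check guarantees every bus has exactly one path back to the substation, which is what the power-flow solver assumes.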
To install RL-ADN:

```shell
pip install RL-ADN
```

Or install from git:

```shell
git clone https://github.com/EnergyQuantResearch/RL-ADN.git
cd RL-ADN
pip install -e .
```

Quickstart:

```python
from rl_adn import PowerNetEnv, make_env_config

config = make_env_config(node=34, topology_scenario="TP2", return_graph=True)
env = PowerNetEnv(config)
state, info = env.reset(return_info=True)
print(info["topology_scenario"])
```

To sample from multiple topologies:
```python
config = make_env_config(
    node=34,
    topology_mode="scenario_pool",
    topology_pool=["TP2", "TP3", "TP4"],
    return_graph=True,
)
```

Potential directions include richer ESS models, topology-aware learning, and broader benchmark coverage across feeders and operating conditions.
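The scenario-pool behavior described above can be sketched in a few lines: the environment draws one topology at reset() and keeps it for the whole episode, while actions only touch dispatch. The class below is a self-contained toy, not PowerNetEnv itself; its names, state shape, and step() signature are assumptions for illustration.

```python
import random

# Conceptual sketch of topology_mode="scenario_pool" behavior, not the
# PowerNetEnv implementation: the active topology is drawn at reset()
# and stays fixed for the episode; step() never changes it.

class PoolEnvSketch:
    def __init__(self, topology_pool, seed=None):
        self.topology_pool = list(topology_pool)
        self._rng = random.Random(seed)
        self.active_scenario = None

    def reset(self):
        self.active_scenario = self._rng.choice(self.topology_pool)
        info = {"topology_scenario": self.active_scenario}
        state = [0.0]  # placeholder state vector
        return state, info

    def step(self, action):
        # Dispatch actions affect storage, never the topology.
        return [0.0], 0.0, False, {"topology_scenario": self.active_scenario}

env = PoolEnvSketch(["TP2", "TP3", "TP4"], seed=0)
state, info = env.reset()
print(info["topology_scenario"])  # one of TP2 / TP3 / TP4
```

Keeping topology out of the action space means an agent trained this way sees topology variation purely as environment randomness, which matches the Phase A evaluation style.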
MIT License