MathPhysSim/FERMI_RL_Paper
Model-free and Bayesian Ensembling Model-based Deep Reinforcement Learning for Particle Accelerator Control


Contact: simon.hirlaender(at)sbg.ac.at

Official repository for the FERMI Free-Electron Laser (FEL) paper, utilizing model-based and model-free reinforcement learning methods to solve complex particle accelerator operation problems. This work demonstrates the practical application of deep RL for intensity optimization, comparing the sample-efficient AE-DYNA (model-based) with the high-performing NAF2 (model-free) algorithms.
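The Bayesian-ensembling idea behind AE-DYNA can be sketched as a bootstrap ensemble of dynamics models whose disagreement serves as an epistemic-uncertainty estimate. The sketch below is illustrative only: linear least-squares models stand in for the neural networks, and all names are ours, not the repository's.

```python
import numpy as np

rng = np.random.default_rng(0)

class EnsembleDynamics:
    """Toy bootstrap ensemble of linear dynamics models.

    Each member is fit on a bootstrap resample of the transition data;
    disagreement between members estimates model uncertainty (the idea
    behind AE-DYNA's Bayesian ensembling, here in miniature).
    """

    def __init__(self, n_members=5):
        self.members = [None] * n_members

    def fit(self, states, actions, next_states):
        X = np.hstack([states, actions])
        for i in range(len(self.members)):
            idx = rng.integers(0, len(X), size=len(X))  # bootstrap resample
            # least-squares linear model: next_state ~ X @ W
            W, *_ = np.linalg.lstsq(X[idx], next_states[idx], rcond=None)
            self.members[i] = W

    def predict(self, state, action):
        x = np.concatenate([state, action])
        preds = np.stack([x @ W for W in self.members])
        # mean prediction plus per-dimension member disagreement
        return preds.mean(axis=0), preds.std(axis=0)
```

In AE-DYNA the policy is trained inside such a learned model, and the ensemble spread tells the agent where the model cannot be trusted.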

Schematic Overview


Algorithms

| Algorithm | Type        | Noise robust | Sample efficient |
|-----------|-------------|:------------:|:----------------:|
| NAF       | Model-free  | ✗            | ✓                |
| NAF2      | Model-free  | ✓            | ✓                |
| ME-TRPO   | Model-based | ✗            | ✓                |
| AE-DYNA   | Model-based | ✓            | ✓                |
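NAF's distinguishing feature (shared by NAF2) is a quadratic advantage term, Q(s, a) = V(s) - 1/2 (a - mu(s))^T P(s) (a - mu(s)) with P(s) = L(s) L(s)^T positive semi-definite, so the greedy continuous action is simply mu(s) in closed form. A minimal NumPy sketch, with plain arrays standing in for the network outputs (names are illustrative):

```python
import numpy as np

def naf_q_value(v, mu, L, action):
    """Quadratic NAF critic: Q(s, a) = V(s) - 0.5 (a - mu)^T P (a - mu).

    P = L @ L.T is positive semi-definite by construction, so Q is
    maximized at action == mu. In NAF, v, mu and the lower-triangular
    L are outputs of a neural network evaluated at state s; here they
    are plain arrays for illustration.
    """
    P = L @ L.T
    diff = action - mu
    return v - 0.5 * diff @ P @ diff

# Greedy action in closed form: argmax_a Q(s, a) == mu.
```

This closed-form maximizer is what lets NAF-style agents do Q-learning in continuous action spaces without a separate actor network.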

Quick Start

Installation (TensorFlow 2)

conda create -n fermi_rl python=3.8
conda activate fermi_rl
pip install -r requirements.txt

Running Experiments

# NAF2 (Model-Free)
python src/run_naf2.py

# AE-DYNA (Model-Based)
python src/AE_Dyna_Tensorflow_2.py

Note

The legacy script src/AEDYNA.py requires TensorFlow 1.15 and stable-baselines (v2) in a separate environment.
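A separate environment for the legacy script could be set up along these lines (a sketch: the environment name, Python version, and stable-baselines pin are our assumptions, not taken from the repository):

```shell
# Hypothetical legacy environment for src/AEDYNA.py (TF 1.15 era)
conda create -n fermi_rl_tf1 python=3.7
conda activate fermi_rl_tf1
pip install "tensorflow==1.15.*" stable-baselines
python src/AEDYNA.py
```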


Results

FERMI FEL Performance

NAF2 Training · NAF2 Convergence (figures)
AE-DYNA Training · AE-DYNA Verification (figures)

Inverted Pendulum Benchmarks

Noise Robustness · Sample Efficiency, NAF vs AE-DYNA (figures)

Project Structure

.
├── src/                  # Python source code
│   ├── run_naf2.py       # NAF2 agent (TF2)
│   ├── AE_Dyna_Tensorflow_2.py # AE-DYNA agent (TF2)
│   └── AEDYNA.py         # AE-DYNA agent (TF1.15, legacy)
├── paper/                # LaTeX source and figures
│   ├── main.tex
│   └── Figures/
β”œβ”€β”€ data/                 # Experimental data
└── requirements.txt

Citation

If you use this work, please cite:

@software{hirlaender_fermi_rl,
  author       = {Hirlaender, Simon and Bruchon, Niky},
  title        = {FERMI RL Paper Code},
  year         = 2020,
  publisher    = {Zenodo},
  doi          = {10.5281/zenodo.4348989},
  url          = {https://doi.org/10.5281/zenodo.4348989}
}
