Publications

* indicates equal contribution.

Please see Google Scholar for a complete and up-to-date list of publications.

2024

  1. Open RL Benchmark: Comprehensive Tracked Experiments for Reinforcement Learning
    Shengyi Huang, Quentin Gallouédec, Florian Felten, Antonin Raffin, Rousslan Fernand Julien Dossa, Yanxiao Zhao, Ryan Sullivan, Viktor Makoviychuk, Denys Makoviichuk, Mohamad H. Danesh, and 23 more authors
    arXiv preprint arXiv:2402.03046, 2024
  2. Knowledge Transfer in Multi-Objective Multi-Agent Reinforcement Learning via Generalized Policy Improvement
    Vicente Nejar Almeida, Lucas N. Alegre, and Ana L. C. Bazzan
    Computer Science and Information Systems, 2024

2023

  1. Multi-Step Generalized Policy Improvement by Leveraging Approximate Models
    Lucas N. Alegre, Ana L. C. Bazzan, Ann Nowé, and Bruno C. da Silva
    In Proceedings of the Thirty-seventh Conference on Neural Information Processing Systems (NeurIPS), 2023
  2. A Toolkit for Reliable Benchmarking and Research in Multi-Objective Reinforcement Learning
    Florian Felten*, Lucas N. Alegre*, Ann Nowé, Ana L. C. Bazzan, El-Ghazali Talbi, Grégoire Danoy, and Bruno C. da Silva
    In Proceedings of the Thirty-seventh Conference on Neural Information Processing Systems (NeurIPS) Track on Datasets and Benchmarks, 2023
  3. Sample-Efficient Multi-Objective Learning via Generalized Policy Improvement Prioritization
    In Proc. of the 22nd International Conference on Autonomous Agents and Multiagent Systems (AAMAS), 2023
  4. RouteChoiceEnv: a Route Choice Library for Multiagent Reinforcement Learning
    Luiz A. Thomasini, Lucas N. Alegre, Gabriel O. Ramos, and Ana L. C. Bazzan
    In Adaptive and Learning Agents Workshop at AAMAS, 2023

2022

  1. Optimistic Linear Support and Successor Features as a Basis for Optimal Policy Transfer
    Lucas N. Alegre, Ana L. C. Bazzan, and Bruno C. da Silva
    In Proceedings of the 39th International Conference on Machine Learning, 2022
  2. MO-Gym: A Library of Multi-Objective Reinforcement Learning Environments
    Lucas N. Alegre, Florian Felten, El-Ghazali Talbi, Grégoire Danoy, Ann Nowé, Ana L. C. Bazzan, and Bruno C. da Silva
    In Proceedings of the 34th Benelux Conference on Artificial Intelligence (BNAIC/Benelearn 2022), 2022
  3. On the Explainability and Expressiveness of Function Approximation Methods in RL-Based Traffic Signal Control
    Lincoln V. Schreiber, Lucas N. Alegre, Ana L. C. Bazzan, and Gabriel O. Ramos
    In 2022 International Joint Conference on Neural Networks (IJCNN), 2022

2021

  1. Minimum-Delay Adaptation in Non-Stationary Reinforcement Learning via Online High-Confidence Change-Point Detection
    Lucas N. Alegre, Ana L. C. Bazzan, and Bruno C. da Silva
    In Proceedings of the 20th International Conference on Autonomous Agents and Multiagent Systems (AAMAS), 2021
    Best Paper Award at LXAI Workshop @ ICML 2021
  2. Using Reinforcement Learning to Control Traffic Signals in a Real-World Scenario: an Approach Based on Linear Function Approximation
    Lucas N. Alegre, Theresa Ziemke, and Ana L. C. Bazzan
    IEEE Transactions on Intelligent Transportation Systems, 2021
  3. Reinforcement Learning vs. Rule-Based Adaptive Traffic Signal Control: A Fourier Basis Linear Function Approximation for Traffic Signal Control
    Theresa Ziemke, Lucas N. Alegre, and Ana L. C. Bazzan
    AI Communications, 2021
  4. Quantifying the Impact of Non-Stationarity in Reinforcement Learning-Based Traffic Signal Control
    Lucas N. Alegre, Ana L. C. Bazzan, and Bruno C. da Silva
    PeerJ Computer Science, 2021

2020

  1. SelfieArt: Interactive Multi-Style Transfer for Selfies and Videos with Soft Transitions
    Lucas N. Alegre, and Manuel M. Oliveira
    In Proceedings of the 2020 33rd SIBGRAPI Conference on Graphics, Patterns and Images, 2020

2019

  1. Parameterized Melody Generation with Autoencoders and Temporally-Consistent Noise
    Aline Weber, Lucas N. Alegre, Jim Torresen, and Bruno C. da Silva
    In Proceedings of the International Conference on New Interfaces for Musical Expression, 2019