Hu, Cheng, Wang, Chenxu, Luo, Weijun, Yang, Chaowen, Xiang, Liuyu, He, Zhaofeng (2025) A Multitask-Based Transfer Framework for Cooperative Multi-Agent Reinforcement Learning. Applied Sciences, 15 (4). doi:10.3390/app15042216
Reference Type | Journal (article/letter/editorial)
---|---
Title | A Multitask-Based Transfer Framework for Cooperative Multi-Agent Reinforcement Learning
Journal | Applied Sciences
Authors | Hu, Cheng; Wang, Chenxu; Luo, Weijun; Yang, Chaowen; Xiang, Liuyu; He, Zhaofeng
Year | 2025 (February 19)
Volume | 15
Issue | 4
Publisher | MDPI AG
DOI | doi:10.3390/app15042216
Mindat Ref. ID | 18058644
Long-form Identifier | mindat:1:5:18058644:2
GUID | 0
Full Reference | Hu, Cheng, Wang, Chenxu, Luo, Weijun, Yang, Chaowen, Xiang, Liuyu, He, Zhaofeng (2025) A Multitask-Based Transfer Framework for Cooperative Multi-Agent Reinforcement Learning. Applied Sciences, 15 (4). doi:10.3390/app15042216
Plain Text | Hu, Cheng, Wang, Chenxu, Luo, Weijun, Yang, Chaowen, Xiang, Liuyu, He, Zhaofeng (2025) A Multitask-Based Transfer Framework for Cooperative Multi-Agent Reinforcement Learning. Applied Sciences, 15 (4). doi:10.3390/app15042216
In | (2025, February) Applied Sciences Vol. 15 (4). MDPI AG
References Listed
These are the references the publisher has listed as being connected to the article. Please check the article itself for the full list of references, which may differ. Not all references are currently linkable within the Digital Library.
Zhao (2024) A Survey on Recent Advancements in Autonomous Driving Using Deep Reinforcement Learning: Applications, Challenges, and Solutions. IEEE Trans. Intell. Transp. Syst., 25, 19365.
Not Yet Imported: journal article in IEEE Transactions on Neural Networks and Learning Systems, doi:10.1109/TNNLS.2022.3142822
Not Yet Imported: journal article, doi:10.1109/TIV.2023.3316196
Not Yet Imported: journal article, doi:10.1007/s10462-021-09997-9
Not Yet Imported: journal article in Artificial Intelligence, doi:10.1016/j.artint.2023.103905
Hu (2019) Modelling the dynamics of multiagent Q-learning in repeated symmetric games: A mean field theoretic approach. Adv. Neural Inf. Process. Syst., 32, 12134.
Vinyals, Oriol, Babuschkin, Igor, Czarnecki, Wojciech M., Mathieu, Michaël, Dudzik, Andrew, Chung, Junyoung, Choi, David H., Powell, Richard, Ewalds, Timo, Georgiev, Petko, Oh, Junhyuk, Horgan, Dan, Kroiss, Manuel, Danihelka, Ivo, Huang, Aja, Sifre, Laurent, Cai, Trevor, Agapiou, John P., Jaderberg, Max, Vezhnevets, Alexander S., Leblond, Rémi, Pohlen, Tobias, Dalibard, Valentin, Budden, David, Sulsky, Yury, Molloy, James, Paine, Tom L., Gulcehre, Caglar, Wang, Ziyu, Pfaff, Tobias, Wu, Yuhuai, Ring, Roman, Yogatama, Dani, Wünsch, Dario, McKinney, Katrina, Smith, Oliver, Schaul, Tom, Lillicrap, Timothy, Kavukcuoglu, Koray, Hassabis, Demis, Apps, Chris, Silver, David (2019) Grandmaster level in StarCraft II using multi-agent reinforcement learning. Nature, 575 (7782), 350-354. doi:10.1038/s41586-019-1724-z
Vinyals, O., Ewalds, T., Bartunov, S., Georgiev, P., Vezhnevets, A.S., Yeo, M., Makhzani, A., Küttler, H., Agapiou, J., and Schrittwieser, J. (2017). StarCraft II: A new challenge for reinforcement learning. arXiv.
Not Yet Imported: journal article, doi:10.1609/aaai.v34i04.5878
Berner, C., Brockman, G., Chan, B., Cheung, V., Dębiak, P., Dennison, C., Farhi, D., Fischer, Q., Hashme, S., and Hesse, C. (2019). Dota 2 with large scale deep reinforcement learning. arXiv.
Ye (2020) Towards playing full MOBA games with deep reinforcement learning. Adv. Neural Inf. Process. Syst., 33, 621.
Foerster (2019) Multi-agent common knowledge reinforcement learning. Adv. Neural Inf. Process. Syst., 32, 9927.
Wang, J., Zhao, J., Cao, Z., Feng, R., Qin, R., and Yu, Y. (2023). Multi-Task Multi-Agent Shared Layers are Universal Cognition of Multi-Agent Coordination. arXiv.
Not Yet Imported: journal article, doi:10.3390/sym12040631
Not Yet Imported: journal article, doi:10.1109/TNNLS.2024.3387397
Not Yet Imported: journal article in Proceedings of the AAAI Conference on Artificial Intelligence, doi:10.1609/aaai.v32i1.11757
Schulman, J., Wolski, F., Dhariwal, P., Radford, A., and Klimov, O. (2017). Proximal policy optimization algorithms. arXiv.
Lou, X., Guo, J., Zhang, J., Wang, J., Huang, K., and Du, Y. (2023). PECAN: Leveraging policy ensemble for context-aware zero-shot human-AI coordination. arXiv.
Li (2024) Tackling cooperative incompatibility for zero-shot human-AI coordination. J. Artif. Intell. Res., 80, 1139.
Liang, Y., Chen, D., Gupta, A., Du, S.S., and Jaques, N. (2024). Learning to Cooperate with Humans using Generative Agents. arXiv.
Carroll (2019) On the utility of learning about humans for human-AI coordination. Adv. Neural Inf. Process. Syst., 32, 5174.
Not Yet Imported: journal article, doi:10.1126/scirobotics.adi8022
Tirumala, D., Wulfmeier, M., Moran, B., Huang, S., Humplik, J., Lever, G., Haarnoja, T., Hasenclever, L., Byravan, A., and Batchelor, N. (2024). Learning Robot Soccer from Egocentric Vision with Deep Reinforcement Learning. arXiv.
Not Yet Imported: book chapter, doi:10.1007/978-3-030-35699-6_1 (Book ID 9783030356989)
Liu, S., Lever, G., Merel, J., Tunyasuvunakool, S., Heess, N., and Graepel, T. (2019). Emergent coordination through competition. arXiv.
Deng, Y., Yu, Y., Ma, W., Wang, Z., Zhu, W., Zhao, J., and Zhang, Y. (2024). SMAC-Hard: Enabling Mixed Opponent Strategy Script and Self-play on SMAC. arXiv.
Ellis (2023) SMACv2: An improved benchmark for cooperative multi-agent reinforcement learning. Adv. Neural Inf. Process. Syst., 36, 37567.
Liu, L., Jiang, W., and Wang, Y. (2024). Tacit Learning with Adaptive Information Selection for Cooperative Multi-Agent Reinforcement Learning. arXiv.
Lan (2023) Contrastive modules with temporal attention for multi-task reinforcement learning. Adv. Neural Inf. Process. Syst., 36, 36507.
Chen, W., Koenig, S., and Dilkina, B. (2024). MARL-LNS: Cooperative Multi-agent Reinforcement Learning via Large Neighborhoods Search. arXiv.
Yang (2022) LDSA: Learning dynamic subtask assignment in cooperative multi-agent reinforcement learning. Adv. Neural Inf. Process. Syst., 35, 1698.
Li (2025) Skill matters: Dynamic skill learning for multi-agent cooperative reinforcement learning. Neural Netw., 181, 106852.
Wang, X., Zhang, S., Zhang, W., Dong, W., Chen, J., Wen, Y., and Zhang, W. (2024, December 10–15). ZSC-Eval: An evaluation toolkit and benchmark for multi-agent zero-shot coordination. Proceedings of the Thirty-Eighth Conference on Neural Information Processing Systems Datasets and Benchmarks Track, Vancouver, BC, Canada.
Not Yet Imported: journal article, doi:10.1023/A:1007379606734
Finn, C., Abbeel, P., and Levine, S. (2017, August 6–11). Model-agnostic meta-learning for fast adaptation of deep networks. Proceedings of the International Conference on Machine Learning, PMLR, Sydney, NSW, Australia.
Rusu, A.A., Rabinowitz, N.C., Desjardins, G., Soyer, H., Kirkpatrick, J., Kavukcuoglu, K., Pascanu, R., and Hadsell, R. (2016). Progressive neural networks. arXiv.
Chen (2024) Multi-task learning in natural language processing: An overview. ACM Comput. Surv., 56, 1.
Teh (2017) Distral: Robust multitask reinforcement learning. Adv. Neural Inf. Process. Syst., 30, 4499.
Xu (2020) Knowledge transfer in multi-task deep reinforcement learning for continuous control. Adv. Neural Inf. Process. Syst., 33, 15146.
Deng, J., Wang, J., Wang, X., Cai, Y., and Liu, P. (2024). Multi-Task Multi-Objective Evolutionary Search Based on Deep Reinforcement Learning for Multi-Objective Vehicle Routing Problems with Time Windows. Symmetry, 16.
Liu (2021) Conflict-averse gradient descent for multi-task learning. Adv. Neural Inf. Process. Syst., 34, 18878.
Fernando, H., Shen, H., Liu, M., Chaudhury, S., Murugesan, K., and Chen, T. (2023, May 1–5). Mitigating gradient bias in multi-objective learning: A provably convergent approach. Proceedings of the International Conference on Learning Representations, Kigali, Rwanda.
Sodhani, S., Zhang, A., and Pineau, J. (2021, July 18–24). Multi-task reinforcement learning with context-based representations. Proceedings of the International Conference on Machine Learning, Online.
Sun (2022) PaCo: Parameter-compositional multi-task reinforcement learning. Adv. Neural Inf. Process. Syst., 35, 21495.
Zhu, Y., Huang, S., Zuo, B., Zhao, D., and Sun, C. (2024). Multi-Task Multi-Agent Reinforcement Learning With Task-Entity Transformers and Value Decomposition Training. IEEE Trans. Autom. Sci. Eng.
Li, C., Dong, S., Yang, S., Hu, Y., Ding, T., Li, W., and Gao, Y. (2024). Multi-Task Multi-Agent Reinforcement Learning With Interaction and Task Representations. IEEE Trans. Neural Netw. Learn. Syst.
Not Yet Imported: journal article, doi:10.1109/TG.2023.3316697
Bose, A., Du, S.S., and Fazel, M. (2024). Offline multi-task transfer RL with representational penalization. arXiv.
Tian (2023) Decompose a task into generalizable subtasks in multi-agent reinforcement learning. Adv. Neural Inf. Process. Syst., 36, 78514.
Wang (2024) ATA-MAOPT: Multi-Agent Online Policy Transfer using Attention Mechanism with Time Abstraction. IEEE Access, 12, 158282.
Yang (2021) An efficient transfer learning framework for multiagent reinforcement learning. Adv. Neural Inf. Process. Syst., 34, 17037.
Not Yet Imported: proceedings article in 2023 IEEE International Conference on Robotics and Automation (ICRA), doi:10.1109/ICRA48891.2023.10160557
Not Yet Imported: journal article, doi:10.23919/JSEE.2022.000045
Not Yet Imported: journal article in IEEE Transactions on Games, doi:10.1109/TG.2023.3272386
Yang, T., You, H., Hao, J., Zheng, Y., and Taylor, M.E. (2024, February 20–27). A Transfer Approach Using Graph Neural Networks in Deep Reinforcement Learning. Proceedings of the AAAI Conference on Artificial Intelligence, Vancouver, BC, Canada.
Not Yet Imported: book in SpringerBriefs in Intelligent Systems, doi:10.1007/978-3-319-28929-8 (Book ID 9783319289274)
De Witt, C.S., Gupta, T., Makoviichuk, D., Makoviychuk, V., Torr, P.H., Sun, M., and Whiteson, S. (2020). Is independent learning all you need in the StarCraft multi-agent challenge? arXiv.
Haarnoja, T., Zhou, A., Abbeel, P., and Levine, S. (2018, July 10–15). Soft actor-critic: Off-policy maximum entropy deep reinforcement learning with a stochastic actor. Proceedings of the International Conference on Machine Learning, Stockholm, Sweden.
Yu (2020) Gradient surgery for multi-task learning. Adv. Neural Inf. Process. Syst., 33, 5824.
Du, Y., Czarnecki, W.M., Jayakumar, S.M., Farajtabar, M., Pascanu, R., and Lakshminarayanan, B. (2018). Adapting auxiliary losses using gradient similarity. arXiv.