ÖØ°õºÚÁϳԹÏÍø998suÔÚÏß_ Íõά

Education:

2008-2012  Fuyang Normal University, B.S. in Computer Science and Technology

2013-2016  Kunming University of Science and Technology, M.S. in Software Engineering

2017-2022  Shanghai University, Ph.D. in Computer Application Technology

 

Work Experience:

2012-2013  Nanjing Ruichen Xinchuang Network Technology Co., Ltd., R&D Engineer

2016-2017  Shanghai Mingjiang Intelligent Systems Co., Ltd., R&D Engineer

 

Research Interests:

Reinforcement learning, multi-agent reinforcement learning, intelligent decision-making

 

Projects:

1) Major Program of the National Natural Science Foundation of China; project title: Research on an Application Verification Platform for Typical Unmanned Surface Vehicle (USV) Swarms in Complex Sea Conditions; grant No. 61991415.

2) 2020 Shanghai Science and Technology Innovation Action Plan; project title: Knowledge-Fused Autonomous Behavior Decision-Making Method for Unmanned Surface Vehicles; grant No. 20YF1413800.

 

Publications:

[1] Wang W, Luo X, Li Y, et al. Unmanned surface vessel obstacle avoidance with prior knowledge-based reward shaping[J]. Concurrency and Computation: Practice and Experience, 2021. (SCI journal)

[2] Wang W, Zhang H, Li Y, et al. USVsSim: A general simulation platform for unmanned surface vessels autonomous learning[J]. Concurrency and Computation: Practice and Experience, 2022. (SCI journal)

[3] Wang W, Li Y, Luo X, et al. Ocean image data augmentation in the USV virtual training scene[J]. Big Earth Data, 2020, 4(4): 451-463. (EI journal)

[4] Wang W, Luo X. Autonomous docking of the USV using deep reinforcement learning combine with observation enhanced[C]//2021 IEEE International Conference on Advances in Electrical Engineering and Computer Applications (AAEECA). (EI conference)

[5] Li Y, Wang X, Wang W, et al. Learning adversarial policy in multiple scenes environment via multi-agent reinforcement learning[J]. Connection Science, 2021, 33(3): 407-426. (SCI journal)

[6] Wang J, Wang X, Luo X, Wang W, et al. SEM: Adaptive staged experience access mechanism for reinforcement learning[C]//2020 IEEE 32nd International Conference on Tools with Artificial Intelligence (ICTAI). IEEE, 2020: 1088-1095. (EI conference)

[7] Zhang Z, Luo X, Liu T, Wang W, et al. Proximal policy optimization with mixed distributed training[C]//2019 IEEE 31st International Conference on Tools with Artificial Intelligence (ICTAI). IEEE, 2019: 1452-1456. (EI conference)