Titlebook: Deep Reinforcement Learning-based Energy Management for Hybrid Electric Vehicles; Yuecheng Li, Hongwen He; Book 2022; Springer Nature Switzerland

Views: 20540 | Replies: 35
1# (OP)
Posted on 2025-3-21 19:28:56
Title: Deep Reinforcement Learning-based Energy Management for Hybrid Electric Vehicles
Editors: Yuecheng Li, Hongwen He
Video: http://file.papertrans.cn/265/264661/264661.mp4
Series: Synthesis Lectures on Advances in Automotive Technology
Description: The urgent need for vehicle electrification and improved fuel efficiency has gained increasing attention worldwide. In this regard, hybrid vehicle systems have proven their value in both academic research and industry applications, where energy management plays a key role in taking full advantage of hybrid electric vehicles (HEVs). There are many well-established energy management approaches, ranging from rule-based strategies to optimization-based methods, that provide diverse options for achieving higher fuel economy. However, the research scope for energy management is still expanding with the development of intelligent transportation systems and improvements in onboard sensing and computing resources. Owing to the boom in machine learning, especially deep learning and deep reinforcement learning (DRL), research on learning-based energy management strategies (EMSs) is gradually gaining momentum. These strategies have shown great promise not only in handling big data but also in generalizing previously learned rules to new scenarios without complex manual tuning. Focusing on learning-based energy management with DRL as the …
Publication date: Book 2022
Edition: 1
DOI: https://doi.org/10.1007/978-3-031-79206-9
ISBN (softcover): 978-3-031-79194-9
ISBN (ebook): 978-3-031-79206-9
Series ISSN: 2576-8107
Series E-ISSN: 2576-8131
Copyright: Springer Nature Switzerland AG 2022
Publication information is still being updated.

Bibliometric panels for this title (impact factor, web visibility, citation count, annual citations, reader feedback, each with subject ranking): no data shown yet.
Poll (single choice, 0 participants):
Perfect with Aesthetics: 0 votes (0%)
Better Implies Difficulty: 0 votes (0%)
Good and Satisfactory: 0 votes (0%)
Adverse Performance: 0 votes (0%)
Disdainful Garbage: 0 votes (0%)
2#
Posted on 2025-3-21 21:45:57
3#
Posted on 2025-3-22 04:01:09
Learning of EMSs in Continuous State Space-Discrete Action Space: … and an efficient learning algorithm in discrete action spaces. Therefore, to address energy management problems with continuous state and discrete action spaces, this chapter describes an energy management method based on deep Q-learning and further conducts research on its learning stability, optimization …
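Since this excerpt points to a deep Q-learning EMS over a continuous state space with discrete actions, a minimal sketch of that idea is given below. The state layout (SOC, speed, power demand), the discretized engine-power actions, the reward, and all hyperparameters are illustrative assumptions, not the book's implementation.

```python
# Minimal deep Q-learning sketch for an HEV energy-management problem with a
# continuous state and discrete actions. State layout, action grid, reward, and
# hyperparameters are illustrative assumptions, not the book's implementation.
import random
from collections import deque

import torch
import torch.nn as nn

STATE_DIM = 3      # assumed state: [battery SOC, vehicle speed, power demand]
N_ACTIONS = 10     # assumed discretization of engine power into 10 levels
GAMMA = 0.99       # discount factor

def make_q_net() -> nn.Module:
    return nn.Sequential(
        nn.Linear(STATE_DIM, 64), nn.ReLU(),
        nn.Linear(64, 64), nn.ReLU(),
        nn.Linear(64, N_ACTIONS),          # one Q-value per discrete action
    )

q_net, target_net = make_q_net(), make_q_net()
target_net.load_state_dict(q_net.state_dict())
optimizer = torch.optim.Adam(q_net.parameters(), lr=1e-3)
replay = deque(maxlen=10_000)              # experience replay buffer

def select_action(state: torch.Tensor, epsilon: float = 0.1) -> int:
    """Epsilon-greedy choice over the discretized engine-power actions."""
    if random.random() < epsilon:
        return random.randrange(N_ACTIONS)
    with torch.no_grad():
        return int(q_net(state.unsqueeze(0)).argmax(dim=1))

def train_step(batch_size: int = 32) -> None:
    """One DQN update: TD target from the target network, MSE loss on Q(s, a)."""
    if len(replay) < batch_size:
        return
    batch = random.sample(replay, batch_size)
    s = torch.stack([b[0] for b in batch])
    a = torch.tensor([b[1] for b in batch])
    r = torch.tensor([b[2] for b in batch], dtype=torch.float32)
    s2 = torch.stack([b[3] for b in batch])
    done = torch.tensor([b[4] for b in batch], dtype=torch.float32)

    q_sa = q_net(s).gather(1, a.unsqueeze(1)).squeeze(1)
    with torch.no_grad():
        td_target = r + GAMMA * (1.0 - done) * target_net(s2).max(dim=1).values
    loss = nn.functional.mse_loss(q_sa, td_target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# Dummy transition to show the data flow; a real loop would step an HEV simulator
# and use fuel consumption plus SOC deviation as the (negative) reward.
state = torch.tensor([0.6, 15.0, 20.0])        # SOC, speed (m/s), demand (kW)
action = select_action(state)
next_state = torch.tensor([0.59, 15.5, 22.0])
replay.append((state, action, -0.05, next_state, 0.0))
train_step()
```

The replay buffer and target network in this sketch are the standard DQN stabilizers; the learning-stability questions mentioned in the excerpt presumably revolve around design choices of this kind.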
4#
Posted on 2025-3-22 05:47:02
5#
Posted on 2025-3-22 09:50:49
6#
Posted on 2025-3-22 15:31:53
7#
Posted on 2025-3-22 20:50:22
8#
Posted on 2025-3-22 23:06:48
Michael Hubbard, Marisol Smith, Renu Kohli: … for the DRL-based EMS is described in this chapter. Because all DRL-based EMSs described in this book are represented by DNNs, they share the same hardware deployment procedure. The DRL-based EMS from Chapter 3 is used here for illustration.
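Because this reply notes that all DRL-based EMSs in the book are DNNs sharing one hardware deployment procedure, here is a minimal sketch of exporting a trained policy network for onboard inference. The network shape, state layout, and file name are assumptions for illustration; the book's actual deployment toolchain may differ.

```python
# Minimal sketch (assumption): exporting a trained PyTorch Q-network so an
# embedded runtime (e.g., libtorch in C++) can execute it on the vehicle
# controller. Layer sizes, state layout, and file name are illustrative only.
import torch
import torch.nn as nn

class QNetwork(nn.Module):
    def __init__(self, state_dim: int = 3, n_actions: int = 10):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, n_actions),      # one Q-value per discrete action
        )

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        return self.net(state)

policy = QNetwork()
policy.eval()                              # inference mode for deployment

# Trace with a dummy state [SOC, speed, power demand] and save as TorchScript,
# which can be loaded on the target hardware without a Python interpreter.
example_state = torch.zeros(1, 3)
scripted = torch.jit.trace(policy, example_state)
scripted.save("ems_policy.pt")             # hypothetical artifact name

# Onboard, the controller feeds the measured state and applies the argmax action.
with torch.no_grad():
    action = int(scripted(example_state).argmax(dim=1))
```

A TorchScript (or equivalently ONNX) export keeps the policy executable without Python, which is typically the constraint on embedded vehicle controllers.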
9#
Posted on 2025-3-23 02:49:26
Deep Reinforcement Learning-based Energy Management for Hybrid Electric Vehicles
10#
Posted on 2025-3-23 06:40:26