
Titlebook: Introduction to Deep Learning; From Logical Calculus to Artificial Intelligence; Sandro Skansi; Textbook, 2018; Springer International Publishing AG, part of Springer Nature

[復(fù)制鏈接]
查看: 28232|回復(fù): 50
樓主
發(fā)表于 2025-3-21 16:08:50 | 只看該作者 |倒序瀏覽 |閱讀模式
Title: Introduction to Deep Learning
Subtitle: From Logical Calculus to Artificial Intelligence
Editor: Sandro Skansi
Video: http://file.papertrans.cn/474/473601/473601.mp4
Overview: Offers a welcome clarity of expression, maintaining mathematical rigor yet presenting the ideas in an intuitive and colourful manner. Includes references to open problems studied in other disciplines…
Series: Undergraduate Topics in Computer Science
Description: This textbook presents a concise, accessible and engaging first introduction to deep learning, offering a wide range of connectionist models which represent the current state of the art. The text explores the most popular algorithms and architectures in a simple and intuitive style, explaining the mathematical derivations in a step-by-step manner. The content covers convolutional networks, LSTMs, Word2vec, RBMs, DBNs, neural Turing machines, memory networks and autoencoders. Numerous examples in working Python code are provided throughout the book, and the code is also supplied separately at an accompanying website. Topics and features: introduces the fundamentals of machine learning, and the mathematical and computational prerequisites for deep learning; discusses feed-forward neural networks, and explores the modifications to these which can be applied to any neural network; examines convolutional neural networks, and the recurrent connections to a feed-forward neural network; describes the notion of distributed representations, the concept of the autoencoder, and the ideas behind language processing with deep learning; presents a brief history of artificial intelligence…
Publication: Textbook, 2018
Keywords: Deep learning; Neural networks; Pattern recognition; Natural language processing; Autoencoders
Edition: 1
DOI: https://doi.org/10.1007/978-3-319-73004-2
ISBN (softcover): 978-3-319-73003-5
ISBN (ebook): 978-3-319-73004-2
Series ISSN: 1863-7310
Series E-ISSN: 2197-1781
Copyright: Springer International Publishing AG, part of Springer Nature 2018
Publication information is still being updated.

#4, posted 2025-3-22 07:33:15
Feedforward Neural Networks: …presents these abstract and graphical objects as mathematical objects (vectors, matrices and tensors). Rosenblatt's perceptron rule is also presented in detail, which makes it clear why a multilayered perceptron cannot be trained with it. The Delta rule is presented as an alternative, along with the idea of iterative…
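The perceptron rule summarised above can be sketched in a few lines of plain Python. This is an illustrative sketch, not code from the book: the AND data set, learning rate and epoch count are assumptions chosen so the rule converges.

```python
# Rosenblatt's perceptron rule on the AND function (illustrative sketch).

def step(x):
    # Threshold activation: fires (1) when the weighted sum is non-negative.
    return 1 if x >= 0 else 0

def train_perceptron(samples, epochs=10, lr=1):
    w, b = [0, 0], 0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            # Perceptron rule: nudge weights by (target - prediction) * input.
            err = target - step(w[0] * x1 + w[1] * x2 + b)
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(AND)
preds = [step(w[0] * x1 + w[1] * x2 + b) for (x1, x2), _ in AND]
```

The Delta rule differs in that it follows the gradient of a differentiable error, which is what allows the extension to multiple layers that the step-function perceptron rule cannot provide.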
#5, posted 2025-3-22 10:01:16
Modifications and Extensions to a Feed-Forward Neural Network: …the problem of local minima, as one of the main problems in machine learning, is explored with all of its intricacies. The main strategy against local minima is the idea of regularization, by adding a regularization term when learning. Both L1 and L2 regularization are explored and explained in detail…
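The effect of the regularization term can be demonstrated on the smallest possible case, one-weight linear regression trained by gradient descent; the data, penalty strength and learning rate below are invented for illustration, not taken from the chapter.

```python
# L2 regularization on one-weight linear regression (illustrative sketch).

def fit(xs, ys, lam, lr=0.01, steps=2000):
    """Minimise mean squared error plus lam * w**2 (the L2 penalty)."""
    w = 0.0
    n = len(xs)
    for _ in range(steps):
        # Gradient of the data term (mean squared error) ...
        grad = sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / n
        # ... plus the gradient of the L2 penalty term lam * w**2.
        grad += 2 * lam * w
        w -= lr * grad
    return w

xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.0, 4.0, 6.0, 8.0]       # data with true slope 2
w_plain = fit(xs, ys, lam=0.0)  # no penalty: recovers the slope
w_reg = fit(xs, ys, lam=5.0)    # penalty shrinks the weight toward zero
```

With `lam = 0` the fit recovers the true slope 2; the L2 penalty shrinks it (here to 1.2). An L1 penalty would instead add `lam * sign(w)` to the gradient, which tends to drive small weights exactly to zero.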
#6, posted 2025-3-22 16:33:35
Convolutional Neural Networks: …revisits how logistic regression accepts data, and defines 1D and 2D convolutional layers as a natural extension of logistic regression. The chapter also details how to connect the layers and handle dimensionality problems. The local receptive field is introduced as a core concept of any convolutional architecture…
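The local receptive field boils down to a small window sliding over the input. A minimal 1D sketch (using the deep-learning convention, which is technically cross-correlation; the kernel is an arbitrary finite-difference filter chosen for illustration):

```python
# A 'valid' 1D convolution: each output is computed from a small window
# (the local receptive field) of the input, with the same kernel reused
# at every position (weight sharing).

def conv1d(signal, kernel):
    k = len(kernel)
    return [
        sum(signal[i + j] * kernel[j] for j in range(k))
        for i in range(len(signal) - k + 1)
    ]

# A [1, -1] kernel responds to changes, so it marks the edges of the step:
edge = conv1d([0, 0, 1, 1, 1, 0], [1, -1])   # -> [0, -1, 0, 0, 1]
```

The output length shrinks to `len(signal) - len(kernel) + 1`, which is exactly the kind of dimensionality bookkeeping the chapter refers to when connecting layers.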
#7, posted 2025-3-22 17:12:58
Recurrent Neural Networks: …the basic settings of learning (sequence to label, sequence to sequence of labels, and sequences with no labels) are introduced and explained in probabilistic terms. The role of hidden states is presented in a detailed exposition (with abundant illustrations) in the setting of a simple recurrent network…
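The role of the hidden state can be shown with a scalar, hand-weighted Elman-style cell; the weights below are arbitrary constants, not learned parameters, chosen only to make the recurrence h_t = tanh(w_x * x_t + w_h * h_{t-1}) visible.

```python
import math

def run_rnn(xs, w_x=1.0, w_h=0.5, h0=0.0):
    """Unroll a one-unit recurrent cell over a sequence, returning all states."""
    h = h0
    states = []
    for x in xs:
        # The same two weights are reused at every time step; the hidden
        # state h is the only channel carrying information forward.
        h = math.tanh(w_x * x + w_h * h)
        states.append(h)
    return states

# An impulse followed by silence: later states are nonzero only because
# the hidden state remembers the impulse, fading at each step.
states = run_rnn([1.0, 0.0, 0.0])
```

Feeding the impulse `[1, 0, 0]` makes the point of recurrence concrete: the input is zero after the first step, yet the network's state stays informative, decaying geometrically through `w_h`.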
#8, posted 2025-3-22 22:21:13
Autoencoders: …what was left out in Chap. ?, completing the exposition of principal component analysis, and demonstrating what a distributed representation is in mathematical terms. The chapter then introduces the main unsupervised learning technique for deep learning, the autoencoder. The structural aspects are presented…
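The link between autoencoders and principal component analysis shows up already in the smallest case: a linear autoencoder with a one-number code, trained by hand-rolled gradient descent. The data (points on the diagonal), initialisation and hyperparameters below are all invented for illustration.

```python
# A linear autoencoder compressing 2D points to a single code number
# (illustrative sketch; gradients written out by hand).

def train_autoencoder(data, lr=0.005, steps=5000):
    e1, e2, d1, d2 = 0.1, 0.1, 0.1, 0.1      # small symmetric init
    for _ in range(steps):
        ge1 = ge2 = gd1 = gd2 = 0.0
        for x1, x2 in data:
            z = e1 * x1 + e2 * x2            # encode: 2 numbers -> 1
            err1 = d1 * z - x1               # decode and compare
            err2 = d2 * z - x2
            gd1 += 2 * err1 * z              # gradient of squared error
            gd2 += 2 * err2 * z
            common = 2 * (err1 * d1 + err2 * d2)
            ge1 += common * x1
            ge2 += common * x2
        e1 -= lr * ge1; e2 -= lr * ge2
        d1 -= lr * gd1; d2 -= lr * gd2
    return e1, e2, d1, d2

points = [(1.0, 1.0), (2.0, 2.0), (-1.0, -1.0), (3.0, 3.0)]
e1, e2, d1, d2 = train_autoencoder(points)
z = e1 * 2.0 + e2 * 2.0                      # compress (2, 2) to one number
recon = (d1 * z, d2 * z)                     # reconstruct both coordinates
```

Because the points lie on a line, one code number suffices for near-perfect reconstruction, and the learned encoder weights come out equal, i.e. aligned (up to scale) with the principal direction (1, 1). This is the PCA connection in miniature.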