
Titlebook: Introduction to Deep Learning; From Logical Calculus to Artificial Intelligence; Sandro Skansi; Textbook 2018; Springer International Publishing AG, part of Springer Nature

Views: 28227 | Replies: 50
1# (OP) | Posted 2025-3-21 16:08:50
Title: Introduction to Deep Learning
Subtitle: From Logical Calculus to Artificial Intelligence
Author: Sandro Skansi
Video: http://file.papertrans.cn/474/473601/473601.mp4
Overview: Offers a welcome clarity of expression, maintaining mathematical rigor yet presenting the ideas in an intuitive and colourful manner. Includes references to open problems studied in other disciplines.
Series: Undergraduate Topics in Computer Science
Description: This textbook presents a concise, accessible and engaging first introduction to deep learning, offering a wide range of connectionist models which represent the current state of the art. The text explores the most popular algorithms and architectures in a simple and intuitive style, explaining the mathematical derivations in a step-by-step manner. The content coverage includes convolutional networks, LSTMs, Word2vec, RBMs, DBNs, neural Turing machines, memory networks and autoencoders. Numerous examples in working Python code are provided throughout the book, and the code is also supplied separately at an accompanying website. Topics and features: introduces the fundamentals of machine learning, and the mathematical and computational prerequisites for deep learning; discusses feed-forward neural networks, and explores the modifications to these which can be applied to any neural network; examines convolutional neural networks, and the recurrent connections to a feed-forward neural network; describes the notion of distributed representations, the concept of the autoencoder, and the ideas behind language processing with deep learning; presents a brief history of artificial intelligence.
Publication date: Textbook 2018
Keywords: Deep learning; Neural networks; Pattern recognition; Natural language processing; Autoencoders
Edition: 1
DOI: https://doi.org/10.1007/978-3-319-73004-2
ISBN (softcover): 978-3-319-73003-5
ISBN (ebook): 978-3-319-73004-2
Series ISSN: 1863-7310 | Series E-ISSN: 2197-1781
Copyright: Springer International Publishing AG, part of Springer Nature 2018
Publication information is still being updated.

Bibliometric charts for Introduction to Deep Learning (chart data not reproduced): Impact Factor; Impact Factor subject ranking; Online visibility; Online visibility subject ranking; Citation count; Citation count subject ranking; Annual citations; Annual citations subject ranking; Reader feedback; Reader feedback subject ranking.
Rating poll (single choice, 0 participants):
Perfect with Aesthetics: 0 votes (0%)
Better Implies Difficulty: 0 votes (0%)
Good and Satisfactory: 0 votes (0%)
Adverse Performance: 0 votes (0%)
Disdainful Garbage: 0 votes (0%)
2# | Posted 2025-3-21 21:24:29
3# | Posted 2025-3-22 01:15:13
4# | Posted 2025-3-22 07:33:15
Feedforward Neural Networks
The chapter presents the abstract and graphical objects of neural networks as mathematical objects (vectors, matrices and tensors). Rosenblatt's perceptron rule is also presented in detail, which makes it clear that a multilayered perceptron cannot be trained with the perceptron rule. The Delta rule is presented as an alternative, together with the idea of iterative learning.
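To make the perceptron rule concrete, here is a minimal NumPy sketch on the linearly separable AND problem. This is illustrative code of ours, not from the book; the learning rate (0.1) and epoch count are arbitrary choices.

import numpy as np

# Rosenblatt's perceptron rule: weights change only on a wrong prediction.
# This converges for linearly separable data such as AND, but there is no
# comparable rule for assigning error to hidden units, which is why a
# multilayered perceptron cannot be trained with the perceptron rule alone.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 0, 0, 1])  # AND targets

w = np.zeros(3)  # two input weights plus a bias weight
for epoch in range(10):
    for x_i, t in zip(X, y):
        x_b = np.append(x_i, 1.0)        # input with a constant bias term
        out = 1 if w @ x_b > 0 else 0    # threshold (step) activation
        w += 0.1 * (t - out) * x_b       # perceptron update

print([1 if w @ np.append(x_i, 1.0) > 0 else 0 for x_i in X])  # [0, 0, 0, 1]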
5# | Posted 2025-3-22 10:01:16
Modifications and Extensions to a Feed-Forward Neural Network
The problem of local minima, one of the main problems in machine learning, is explored with all of its intricacies. The main strategy against local minima is regularization, realized by adding a regularization term to the objective during learning. Both L1 and L2 regularization are explored and explained in detail.
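As a rough illustration of the regularization idea, the sketch below adds an L1 or L2 penalty to a squared-error loss and follows the gradient. All names (loss, grad, lam) and the toy data are ours, not the book's.

import numpy as np

def loss(w, X, y, lam=0.01, kind="l2"):
    err = X @ w - y
    data_term = 0.5 * np.mean(err ** 2)
    if kind == "l2":
        penalty = lam * np.sum(w ** 2)     # L2: shrinks all weights smoothly
    else:
        penalty = lam * np.sum(np.abs(w))  # L1: drives some weights toward zero
    return data_term + penalty

def grad(w, X, y, lam=0.01, kind="l2"):
    g = X.T @ (X @ w - y) / len(y)         # gradient of the data term
    return g + (2 * lam * w if kind == "l2" else lam * np.sign(w))

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
w_true = np.array([2.0, 0.0, 0.0, -1.0, 0.0])   # sparse ground truth
y = X @ w_true + 0.1 * rng.normal(size=100)

w = np.zeros(5)
for _ in range(500):
    w -= 0.1 * grad(w, X, y, kind="l1")    # plain gradient descent
print(np.round(w, 2))                      # irrelevant weights end up near zero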
6# | Posted 2025-3-22 16:33:35
Convolutional Neural Networks
The chapter reviews how logistic regression accepts data, and defines 1D and 2D convolutional layers as a natural extension of logistic regression. It also details how to connect the layers and resolve the resulting dimensionality problems. The local receptive field is introduced as the core concept of any convolutional architecture.
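To show what a local receptive field means in code, here is a minimal sketch of the valid-mode 2D convolution (strictly, cross-correlation) at the heart of a convolutional layer. This is our own illustrative NumPy loop, not the book's implementation.

import numpy as np

def conv2d(image, kernel):
    # Slide the kernel over the image; each output unit is computed from
    # only a small kh-by-kw patch of the input: its local receptive field.
    kh, kw = kernel.shape
    oh, ow = image.shape[0] - kh + 1, image.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

image = np.arange(25, dtype=float).reshape(5, 5)
edge = np.array([[1.0, -1.0]])        # a tiny horizontal-difference kernel
print(conv2d(image, edge).shape)      # (5, 4): the valid-mode output size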
7# | Posted 2025-3-22 17:12:58
Recurrent Neural Networks
The basic settings of learning (sequence to label, sequence to sequence of labels, and sequences with no labels) are introduced and explained in probabilistic terms. The role of hidden states is presented in a detailed exposition (with abundant illustrations) in the setting of a simple recurrent network.
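The role of the hidden state can be shown in a few lines. Below is a sketch of a simple (Elman-style) recurrent update, h_t = tanh(W x_t + U h_{t-1} + b); the sizes and random weights are ours and purely illustrative.

import numpy as np

rng = np.random.default_rng(0)
n_in, n_hid = 3, 4
W = rng.normal(scale=0.1, size=(n_hid, n_in))   # input-to-hidden weights
U = rng.normal(scale=0.1, size=(n_hid, n_hid))  # hidden-to-hidden (recurrent)
b = np.zeros(n_hid)

h = np.zeros(n_hid)                    # initial hidden state
sequence = rng.normal(size=(5, n_in))  # a toy sequence of five time steps
for x_t in sequence:
    h = np.tanh(W @ x_t + U @ h + b)   # the state carries context forward
print(h)                               # final state summarizes the whole sequence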
8# | Posted 2025-3-22 22:21:13
Autoencoders
The chapter picks up what was left out in Chap. ?, completing the exposition of principal component analysis and demonstrating what a distributed representation is in mathematical terms. It then introduces the main unsupervised learning technique for deep learning, the autoencoder, and presents its structural aspects.
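To connect autoencoders with principal component analysis, here is a minimal linear autoencoder trained by gradient descent on its reconstruction error; with linear activations the 2-unit bottleneck learns the same subspace PCA would (up to a linear transform). The shapes, learning rate and toy data are our own illustrative choices.

import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 6))               # toy data with 6 features
W_enc = rng.normal(scale=0.1, size=(6, 2))  # encoder: 6 -> 2 bottleneck code
W_dec = rng.normal(scale=0.1, size=(2, 6))  # decoder: 2 -> 6 reconstruction

lr = 0.01
for _ in range(2000):
    code = X @ W_enc                        # encode to the bottleneck
    err = code @ W_dec - X                  # reconstruction error
    W_dec -= lr * code.T @ err / len(X)     # gradient of 0.5 * mean error^2
    W_enc -= lr * X.T @ (err @ W_dec.T) / len(X)
print(np.mean(err ** 2))                    # reconstruction error after training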
9# | Posted 2025-3-23 05:02:37
10# | Posted 2025-3-23 06:13:04