Title: Convex Optimization with Computational Errors; Alexander J. Zaslavski; Book, Springer Nature Switzerland AG, 2020.
Author: Fixate Time: 2025-3-21 16:29
Author: 黃油沒有 Time: 2025-3-21 23:24
Subgradient Projection Algorithm: In this chapter we study the subgradient projection algorithm for minimization of convex and nonsmooth functions and for computing the saddle points of convex–concave functions, under the presence of computational errors. The problem is described by an objective function and a set of feasible points. For this algorithm each iteration consists of two steps. The first step is a calculation of a subgradient of the objective function, while in the second one we calculate a projection on the feasible set. In each of these two steps there is a computational error. In general, these two computational errors are different. We show that our algorithm generates a good approximate solution, if all the computational errors are bounded from above by a small positive constant. Moreover, if we know the computational errors for the two steps of our algorithm, we find out what approximate solution can be obtained and how many iterates one needs for this.
Author: 認(rèn)識(shí) Time: 2025-3-22 01:39
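A minimal Python sketch of the two-step iteration just described, with independent, differently sized errors injected into the subgradient step and the projection step. The objective (the l1-norm), the feasible set (the unit ball), the step sizes, and the error model are illustrative assumptions, not taken from the book.

import numpy as np

# Minimize f(x) = ||x||_1 over the unit ball, simulating a bounded
# computational error in each of the two steps of every iteration.
rng = np.random.default_rng(0)

def subgradient(x):                 # a subgradient of the l1-norm
    return np.sign(x)

def project_ball(x):                # exact projection onto {x : ||x|| <= 1}
    n = np.linalg.norm(x)
    return x if n <= 1.0 else x / n

def noisy(v, delta):                # perturbation of norm exactly delta
    e = rng.standard_normal(v.shape)
    return v + delta * e / np.linalg.norm(e)

x = np.array([3.0, -2.0, 1.0])
delta1, delta2 = 1e-3, 1e-4         # the two steps' error bounds differ
for k in range(1, 201):
    g = noisy(subgradient(x), delta1)            # step 1: inexact subgradient
    x = noisy(project_ball(x - g / k), delta2)   # step 2: inexact projection
print(np.abs(x).sum())              # approaches min f = 0 up to an error floor

In this simulation the printed objective value stops improving once the step sizes fall below the scale of the injected errors, which is the qualitative behavior the chapter's estimates quantify.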
Gradient Algorithm with a Smooth Objective Function: In this chapter we study the gradient algorithm for minimization of convex and smooth functions, under the presence of computational errors. The problem is described by an objective function and a set of feasible points. For this algorithm each iteration consists of two steps. The first step is a calculation of a gradient of the objective function, while in the second one we calculate a projection on the feasible set. In each of these two steps there is a computational error. In general, these two computational errors are different. We show that our algorithm generates a good approximate solution, if all the computational errors are bounded from above by a small positive constant. Moreover, if we know the computational errors for the two steps of our algorithm, we find out what approximate solution can be obtained and how many iterates one needs for this.
Author: 設(shè)想 Time: 2025-3-22 04:43
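A companion sketch for the smooth case: projected gradient with step 1/L on a small quadratic over a box. The quadratic, the box, and the error bounds are hypothetical choices for illustration.

import numpy as np

# Minimize f(x) = 0.5 x^T A x - b^T x over the box [0, 1]^2, with a bounded
# error in the gradient step and a (smaller) one in the projection step.
rng = np.random.default_rng(1)
A = np.array([[3.0, 1.0], [1.0, 2.0]])      # positive definite
b = np.array([1.0, 1.0])
L = np.linalg.eigvalsh(A).max()             # Lipschitz constant of the gradient

def noisy(v, delta):
    e = rng.standard_normal(v.shape)
    return v + delta * e / np.linalg.norm(e)

x = np.zeros(2)
delta1, delta2 = 1e-4, 1e-5                 # different bounds for the two steps
for _ in range(500):
    g = noisy(A @ x - b, delta1)            # step 1: inexact gradient
    x = noisy(np.clip(x - g / L, 0.0, 1.0), delta2)  # step 2: inexact projection
print(x)

For this particular A and b the unconstrained minimizer already lies in the box, so the printed point should be close to A^{-1} b = [0.2, 0.4].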
Continuous Subgradient Method: In this chapter we study the continuous subgradient method for minimization of convex and nonsmooth functions and for computing the saddle points of convex–concave functions, under the presence of computational errors. The problem is described by an objective function and a set of feasible points. For this algorithm we need a calculation of a subgradient of the objective function and a calculation of a projection on the feasible set. In each of these calculations there is a computational error. We show that our algorithm generates a good approximate solution, if all the computational errors are bounded from above by a small positive constant. Moreover, if we know the computational errors for the two calculations of our algorithm, we find out what approximate solution can be obtained and how much time one needs for this.
Author: figurine Time: 2025-3-22 09:04
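The chapter's method is continuous in time; the forward-Euler discretization below is only a way to simulate the trajectory, with an error in each of the two calculations. All concrete choices (objective, set, time step, horizon, error model) are assumptions for illustration.

import numpy as np

# Simulate the projected subgradient flow for f(x) = ||x||_1 on the unit ball.
rng = np.random.default_rng(2)

def subgradient(x):
    return np.sign(x)

def project_ball(x):
    n = np.linalg.norm(x)
    return x if n <= 1.0 else x / n

def noisy(v, delta):
    e = rng.standard_normal(v.shape)
    return v + delta * e / np.linalg.norm(e)

x = np.array([0.9, -0.4])
h, T = 0.01, 20.0                   # Euler time step and horizon
delta1, delta2 = 1e-3, 1e-3         # errors of the two calculations
for _ in range(int(T / h)):
    g = noisy(subgradient(x), delta1)           # inexact subgradient evaluation
    x = noisy(project_ball(x - h * g), delta2)  # inexact projection
print(np.abs(x).sum())              # f decays along the trajectory to an error floor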
PDA-Based Method for Convex Optimization: In this chapter we study the PDA-based method for convex optimization, under the presence of computational errors. For this algorithm each iteration consists of two steps. In each of these two steps there is a computational error. In general, these two computational errors are different. We show that our algorithm generates a good approximate solution, if all the computational errors are bounded from above by a small positive constant. Moreover, if we know the computational errors for the two steps of our algorithm, we find out what approximate solution can be obtained and how many iterates one needs for this.
Author: 吊胃口 Time: 2025-3-22 22:56
Author: filial Time: 2025-3-23 03:08
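This excerpt does not spell out the PDA iteration itself, so the sketch below only illustrates the stated claim on a generic two-step method with per-step errors: the best attainable objective value degrades as the common error bound grows. Everything concrete here (objective, set, steps) is a stand-in, not the chapter's algorithm.

import numpy as np

rng = np.random.default_rng(3)

def run(delta, iters=2000):
    # Two-step iteration (inexact subgradient, then inexact projection)
    # for f(x) = ||x||_1 over the unit ball; returns the best f-value seen.
    x = np.array([3.0, -2.0])
    best = np.inf
    for k in range(1, iters + 1):
        g = np.sign(x) + delta * rng.standard_normal(2)   # step 1 error
        y = x - g / k
        n = np.linalg.norm(y)
        y = y if n <= 1.0 else y / n
        x = y + delta * rng.standard_normal(2)            # step 2 error
        best = min(best, np.abs(x).sum())
    return best

for delta in (1e-1, 1e-2, 1e-3, 1e-4):
    print(delta, run(delta))        # smaller error bounds allow better solutions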
A Projected Subgradient Method for Nonsmooth Problems: In this chapter we study the projected subgradient method for a class of nonsmooth problems. For this class of problems, an objective function is assumed to be convex but a set of admissible points is not necessarily convex. Our goal is to obtain an ε-approximate solution in the presence of computational errors, where ε is a given positive number.
Author: Arb853 Time: 2025-3-23 08:50
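A sketch of the setting: a convex objective over a nonconvex admissible set for which the projection is still computable, here the unit sphere. The linear objective, the sphere, the value of ε, and the error model are illustrative assumptions.

import numpy as np

# Minimize the convex f(x) = c^T x over the (nonconvex) unit sphere and check
# whether an eps-approximate solution was reached despite the injected errors.
rng = np.random.default_rng(4)
c = np.array([1.0, 2.0, -1.0])
eps = 1e-2                          # the given positive number from the abstract

def project_sphere(x):
    n = np.linalg.norm(x)
    return x / n if n > 0 else np.array([1.0, 0.0, 0.0])

x = project_sphere(rng.standard_normal(3))
delta = 1e-4                        # bound on the computational errors
fmin = -np.linalg.norm(c)           # exact minimum of c^T x over the sphere
for k in range(1, 5001):
    g = c + delta * rng.standard_normal(3)   # inexact (sub)gradient
    x = project_sphere(x - g / k) + delta * rng.standard_normal(3)
print(c @ x - fmin <= eps, c @ x - fmin)     # eps-approximate solution check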
Convex Optimization with Computational Errors, 978-3-030-37822-6. Series ISSN 1931-6828; Series E-ISSN 1931-6836.
Author: prodrome Time: 2025-3-23 11:15
Springer Optimization and Its Applications (series). Cover image: http://image.papertrans.cn/c/image/237847.jpg
Author: Organization Time: 2025-3-23 22:30
Mirror Descent Algorithm: In this chapter we analyze the mirror descent algorithm for minimization of convex and nonsmooth functions and for computing the saddle points of convex–concave functions, under the presence of computational errors. The problem is described by an objective function and a set of feasible points.
Author: 比賽用背帶 Time: 2025-3-24 02:28
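A minimal mirror descent sketch with the entropy mirror map on the probability simplex, where the update is multiplicative and the Bregman projection is a renormalization. The linear objective, step sizes, and error model are illustrative assumptions.

import numpy as np

# Mirror descent for f(x) = c^T x on the simplex, with an inexact gradient.
rng = np.random.default_rng(5)
c = np.array([0.3, 1.0, 0.5])

x = np.ones(3) / 3                  # start at the center of the simplex
delta = 1e-4                        # bound on the computational errors
for k in range(1, 1001):
    g = c + delta * rng.standard_normal(3)   # inexact (sub)gradient
    w = x * np.exp(-g / np.sqrt(k))          # entropic (multiplicative) update
    x = w / w.sum()                          # Bregman projection onto the simplex
print(x)                            # mass concentrates on argmin_i c_i (index 0)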
Minimization of Sharp Weakly Convex Functions: In this chapter we study the subgradient projection algorithm for minimization of sharp weakly convex functions, under the presence of computational errors. The problem is described by an objective function and a set of feasible points.
Author: extract Time: 2025-3-25 02:13
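For sharp functions (those growing at least linearly with the distance to the solution set), geometrically decaying step sizes are a standard device in the literature; the sketch below uses them, though the chapter's own scheme and constants may differ. The function f(x) = ||x|| (sharp and convex, hence weakly convex), the step parameters, and the error model are assumptions.

import numpy as np

# Subgradient method with geometric step decay on f(x) = ||x||, with bounded
# errors added to the subgradient and to the update itself.
rng = np.random.default_rng(6)

def subgradient(x):
    n = np.linalg.norm(x)
    return x / n if n > 0 else np.zeros_like(x)

x = np.array([2.0, -1.0])
t, q = 1.0, 0.9                     # initial step size and decay ratio
delta = 1e-6                        # bound on the computational errors
for _ in range(200):
    g = subgradient(x) + delta * rng.standard_normal(2)
    x = x - t * g + delta * rng.standard_normal(2)
    t *= q                          # geometric decay drives linear convergence
print(np.linalg.norm(x))            # distance to the minimizer, down to an error floor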
https://doi.org/10.1007/978-3-030-37822-6
Keywords: convex optimization; mathematical programming; computational error; nonlinear analysis; solving real-world problems
Author: 樸素 Time: 2025-3-25 05:18
Introduction: The book studies algorithms which generate an approximate solution of the problem in the presence of computational errors. It is known that the algorithm generates a good approximate solution, if the sequence of computational errors is bounded from above by a small constant. In our study, presented in this book, we take into consideration the fact that for every algorithm its iteration consists of several steps and that computational errors for different steps are different, in general. In this chapter we discuss several algorithms which are studied in this book.
Author: BINGE Time: 2025-3-25 16:30
Book description: the computational errors for different steps of an algorithm are, generally, different. This fact, which was not taken into account in the previous book, is indeed important in practice. For example, the subgradient projection algorithm consists of two steps. The first step is a calculation of a subgradient of the objective function, while in the second one we calculate a projection on the feasible set.
ISBN 978-3-030-37824-0; 978-3-030-37822-6. Series ISSN 1931-6828; Series E-ISSN 1931-6836.
Author: 刺穿 Time: 2025-3-27 09:56
Minimization of Quasiconvex Functions: In this chapter we study an algorithm for minimization of quasiconvex functions, under the presence of computational errors. We show that our algorithm generates a good approximate solution, if all the computational errors are bounded from above by a small positive constant. Moreover, if we know the computational errors for the two steps of our algorithm, we find out what approximate solution can be obtained and how many iterates one needs for this.
Author: 積云 Time: 2025-3-28 10:09
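Since quasiconvex functions need not have informative subgradient magnitudes, normalized directions are a common scheme in the literature for this problem class; the chapter's algorithm may differ. The function f(x) = sqrt(||x||) (quasiconvex, minimized at 0), the steps, and the error model below are assumptions.

import numpy as np

# Normalized-direction method for the quasiconvex f(x) = sqrt(||x||),
# with a bounded error in each of the two steps of an iteration.
rng = np.random.default_rng(7)

def direction(x):                   # normalized (sub)gradient direction
    n = np.linalg.norm(x)
    return x / n if n > 0 else np.zeros_like(x)

x = np.array([4.0, -3.0])
delta = 1e-5                        # bound on the computational errors
for k in range(1, 2001):
    d = direction(x) + delta * rng.standard_normal(2)        # step 1 error
    x = x - d / np.sqrt(k) + delta * rng.standard_normal(2)  # step 2 error
print(np.sqrt(np.linalg.norm(x)))   # f(x) approaches its minimum 0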
An Optimization Problem with a Composite Objective Function: In this chapter we study an algorithm for minimization of a composite objective function, under the presence of computational errors. Each iteration of the algorithm consists of two steps, and in each of these two steps there is a computational error; in general, these two computational errors are different. We show that our algorithm generates a good approximate solution, if all the computational errors are bounded from above by a small positive constant. Moreover, if we know the computational errors for the two steps of our algorithm, we find out what approximate solution can be obtained and how many iterates one needs for this.
Author: Commonwealth Time: 2025-3-29 10:51
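For composite objectives f = g + h with smooth g and simple h, proximal gradient is a common splitting; the chapter's method may differ, and the least-squares-plus-l1 instance below is an assumption for illustration.

import numpy as np

# Proximal gradient for f(x) = 0.5 ||Ax - b||^2 + lam ||x||_1, with a bounded
# error in the gradient step and a (different) one in the proximal step.
rng = np.random.default_rng(8)
A = np.array([[1.0, 0.5], [0.0, 1.0], [0.3, 0.2]])
b = np.array([1.0, -0.5, 0.2])
lam = 0.1
L = np.linalg.eigvalsh(A.T @ A).max()   # Lipschitz constant of the smooth part

def prox_l1(x, t):                      # proximal map of t * lam * ||.||_1
    return np.sign(x) * np.maximum(np.abs(x) - t * lam, 0.0)

x = np.zeros(2)
delta1, delta2 = 1e-5, 1e-6             # the two steps' error bounds differ
for _ in range(500):
    g = A.T @ (A @ x - b) + delta1 * rng.standard_normal(2)            # step 1 error
    x = prox_l1(x - g / L, 1.0 / L) + delta2 * rng.standard_normal(2)  # step 2 error
print(x)                                # approximate composite minimizer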
A Zero-Sum Game with Two Players: In this chapter we study an algorithm for finding a saddle point of a zero-sum game with two players, under the presence of computational errors. We show that our algorithm generates a good approximate solution, if all the computational errors are bounded from above by a small positive constant. Moreover, if we know the computational errors for the two steps of our algorithm, we find out what approximate solution can be obtained and how many iterates one needs for this.
Author: ACRID Time: 2025-3-29 14:49
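A subgradient descent-ascent sketch for the saddle point of a convex-concave payoff, the usual way a two-player zero-sum game is posed; the bilinear payoff, the box strategy sets, the iterate averaging, and the error model are illustrative assumptions rather than the chapter's setup.

import numpy as np

# Descent-ascent for f(x, y) = x^T M y on the box [-1, 1]^2 for each player,
# with bounded errors in the partial-gradient and projection steps; the
# ergodic averages of the iterates approximate the saddle point.
rng = np.random.default_rng(9)
M = np.array([[1.0, -1.0], [-1.0, 2.0]])

def clip(z):                            # projection onto the box [-1, 1]^2
    return np.clip(z, -1.0, 1.0)

x = np.array([1.0, 1.0])                # minimizing player
y = np.array([1.0, -1.0])               # maximizing player
delta = 1e-4                            # bound on the computational errors
xs, ys = np.zeros(2), np.zeros(2)
for k in range(1, 2001):
    gx = M @ y + delta * rng.standard_normal(2)    # inexact partial gradients
    gy = M.T @ x + delta * rng.standard_normal(2)
    x = clip(x - gx / np.sqrt(k)) + delta * rng.standard_normal(2)
    y = clip(y + gy / np.sqrt(k)) + delta * rng.standard_normal(2)
    xs += x
    ys += y
print(xs / 2000, ys / 2000)             # approximate saddle point strategies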