Python High-Dimensional Data Analysis (English Edition) / Python Engineering Applications Series
- Price: ¥43.00
- Series: Python Engineering Applications Series
- Author: Zhao Yuhui (ed.)
- Publication date: August 1, 2020
- ISBN: 9787560655772
- Publisher: Xidian University Press
- CLC classification: TP311.561
- Pages: 268
- Paper: offset
- Edition: 1st
- Format: 16mo
Python High-Dimensional Data Analysis (English Edition) starts from matrix computation, in particular the eigenvalue decomposition and the singular value decomposition, and discusses the least-squares model based on the normal equations, which leads to the problem of solving rank-deficient linear systems. It then introduces two lossy dimensionality-reduction methods, principal component analysis (with principal component regression) and partial least squares regression, covering their models, algorithms, and numerous examples, and extends linear regression to regularization methods, presenting the principles, algorithms, and examples of ridge regression and the Lasso. Finally, a calibration-transfer case study on infrared spectra extends the linear models into the field of transfer learning.
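To make that progression concrete, here is a minimal sketch (not taken from the book) of the idea the opening chapters build toward: using NumPy's SVD to obtain the minimum-norm least-squares solution of a rank-deficient, underdetermined system. The matrix sizes and data are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((20, 50))   # more variables than samples: rank-deficient normal equations
y = rng.standard_normal(20)

# SVD pseudo-inverse: invert only the singular values above a numerical-rank tolerance.
U, s, Vt = np.linalg.svd(X, full_matrices=False)
tol = s.max() * max(X.shape) * np.finfo(float).eps
r = int((s > tol).sum())            # numerical rank
beta = Vt[:r].T @ ((U[:, :r].T @ y) / s[:r])  # minimum-norm least-squares solution

print(np.allclose(beta, np.linalg.pinv(X) @ y))  # True: matches NumPy's built-in pseudo-inverse
```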
Every chapter includes worked examples that analyze infrared spectroscopy datasets using Python and the Sklearn machine-learning library. An infrared spectral dataset consists purely of absorbance measurements, which can be regressed directly against the analyte concentrations recorded in its labels, so readers can devote their attention entirely to modelling high-dimensional data and to implementing and analyzing the algorithms.
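As a taste of what such an example looks like, here is a minimal sketch using Sklearn's PLSRegression on synthetic data standing in for a real absorbance matrix; the sample count, wavelength grid, and noise level are all invented for illustration.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n_samples, n_wavelengths = 80, 700                # hypothetical dataset dimensions
concentration = rng.uniform(0.0, 1.0, n_samples)  # the label: analyte concentration

# A Gaussian absorption band scaled by concentration, plus measurement noise,
# mimics the linear absorbance-concentration relationship of Beer's law.
band = np.exp(-0.5 * ((np.arange(n_wavelengths) - 350) / 40.0) ** 2)
X = np.outer(concentration, band) + 0.01 * rng.standard_normal((n_samples, n_wavelengths))

pls = PLSRegression(n_components=3)
scores = cross_val_score(pls, X, concentration, cv=5, scoring="r2")
print("5-fold cross-validated R^2:", scores.mean())
```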
The book can serve as a textbook for programs in information management and information systems, computer-related disciplines, and big data; as a reference for engineers working in spectral and chemical analysis and for researchers in chemometrics; and as reading for any Python engineer interested in data analysis and research. The original literature and datasets it cites will be very helpful to all of these readers.
Chapter 1 Basis of Matrix Calculation
1.1 Fundamental Concepts
1.1.1 Notation
1.1.2 “Bigger-Block” Interpretations of Matrix Multiplication
1.1.3 Fundamental Linear Algebra
1.1.4 Four Fundamental Subspaces of a Matrix
1.1.5 Vector Norms
1.1.6 Determinants
1.1.7 Properties of Determinants
1.2 The Most Basic Matrix Decomposition
1.2.1 Gaussian Elimination
1.2.2 The LU Decomposition
1.2.3 The LDM Factorization
1.2.4 The LDL Decomposition for Symmetric Matrices
1.2.5 Cholesky Decomposition
1.2.6 Applications and Examples of the Cholesky Decomposition
1.2.7 Eigendecomposition
1.2.8 Matrix Norms
1.2.9 Covariance Matrices
1.3 Singular Value Decomposition (SVD)
1.3.1 Orthogonalization
1.3.2 Existence Proof of the SVD
1.3.3 Partitioning the SVD
1.3.4 Properties and Interpretations of the SVD
1.3.5 Relationship between SVD and ED
1.3.6 Ellipsoidal Interpretation of the SVD
1.3.7 An Interesting Theorem
1.4 The Quadratic Form
1.4.1 Quadratic Form Theory
1.4.2 The Gaussian Multi-Variate Probability Density Function
1.4.3 The Rayleigh Quotient
Chapter 2 The Solution of Least Squares Problems
2.1 Linear Least Squares Estimation
2.1.1 Example: Autoregressive Modelling
2.1.2 The Least-Squares Solution
2.1.3 Interpretation of the Normal Equations
2.1.4 Properties of the LS Estimate
2.1.5 Linear Least-Squares Estimation and the Cramér-Rao Lower Bound
2.2 A Generalized “Pseudo-Inverse” Approach to Solving the Least-Squares Problem
2.2.1 Least Squares Solution Using the SVD
2.2.2 Interpretation of the Pseudo-Inverse
Chapter 3 Principal Component Analysis
3.1 Introductory Example
3.2 Theory
3.2.1 Taking Linear Combinations
3.2.2 Explained Variation
3.2.3 PCA as a Model
3.2.4 Taking More Components
3.3 History of PCA
3.4 Practical Aspects
3.4.1 Preprocessing
3.4.2 Choosing the Number of Components
3.4.3 When Using PCA for Other Purposes
3.4.4 Detecting Outliers
References
3.5 Sklearn PCA
3.5.1 Source Code
3.5.2 Examples
3.6 Principal Component Regression
3.6.1 Source Code
3.6.2 K-Fold Cross-Validation
3.6.3 Examples
3.7 Subspace Methods for Dynamic Model Estimation in PAT Applications
3.7.1 Introduction
3.7.2 Theory
3.7.3 State Space Models in Chemometrics
3.7.4 Milk Coagulation Monitoring
3.7.5 State Space Based Monitoring
3.7.6 Results
3.7.7 Concluding Remarks
3.7.8 Appendix
References
Chapter 4 Partial Least Squares Analysis
4.1 Basic Concept
4.1.1 Partial Least Squares
4.1.2 Form of Partial Least Squares
4.1.3 PLS Regression
4.1.4 Statistic
Reference
4.2 NIPALS and SIMPLS Algorithm
4.2.1 NIPALS
4.2.2 SIMPLS
References
4.3 Programming Method of Standard Partial Least Squares
4.3.1 Cross-Validation
4.3.2 Procedure of NIPALS
4.4 Example Application
4.4.1 Demo of PLS
4.4.2 Corn Dataset
4.4.3 Wheat Dataset
4.4.4 Pharmaceutical Tablet Dataset
4.5 Stack Partial Least Squares
4.5.1 Introduction
4.5.2 Theory of Stack Partial Least Squares
4.5.3 Demo of SPLS
4.5.4 Experiments
References
Chapter 5 Regularization
5.1 Regularization
5.1.1 Classification
5.1.2 Tikhonov Regularization
5.1.3 Regularizers for Sparsity
5.1.4 Other Uses of Regularization in Statistics and Machine Learning
5.2 Ridge Regression: Biased Estimation for Nonorthogonal Problems
5.2.1 Properties of Best Linear Unbiased Estimation
5.2.2 Ridge Regression
5.2.3 The Ridge Trace
5.2.4 Mean Square Error Properties of Ridge Regression
5.2.5 A General Form of Ridge Regression
5.2.6 Relation to Other Work in Regression
5.2.7 Selecting a Better Estimate of β
References
5.3 Lasso
5.3.1 Introduction
5.3.2 Theory of the Lasso
References
5.4 The Example of Ridge Regression and Lasso Regression
5.4.1 Example
5.4.2 Practical Example
5.5 Sparse PCA
5.5.1 Introduction
5.5.2 Motivation and Method Details
5.5.3 SPCA for p ≫ n and Gene Expression Arrays
5.5.4 Demo of SPCA
References
Chapter 6 Transfer Method
6.1 Calibration Transfer of Spectral Models[1]
6.1.1 Introduction
6.1.2 Calibration Transfer Setting
6.1.3 Related Work
6.1.4 New or Adapted Methods
6.1.5 Standard-free Alternatives to Methods Requiring Transfer Standards
References
6.2 PLS Subspace Based Calibration Transfer for NIR Quantitative Analysis
6.2.1 Calibration Transfer Method
6.2.2 Experimental
6.2.3 Results and Discussion
6.2.4 Conclusion
References
6.3 Calibration Transfer Based on Affine Invariance for NIR without Standard Samples
6.3.1 Theory
6.3.2 Experimental
6.3.3 Results and Discussion
6.3.4 Conclusions