1. Stanford Open Course: Machine Learning (Andrew Ng)

http://open.163.com/special/opencourse/machinelearning.html

 

Notes

http://cs229.stanford.edu/syllabus.html

 

http://www.cnblogs.com/jerrylead/default.html?page=3

 http://www.cnblogs.com/madrabbit/

https://blog.csdn.net/xiahouzuoxin

https://blog.csdn.net/u010249583

https://blog.csdn.net/stdcoutzyx/article/details/17741475

https://blog.csdn.net/stdcoutzyx/article/details/53869661

 https://blog.csdn.net/app_12062011/article/details/50577717

 https://blog.csdn.net/antkillerfarm/article/details/52980075

 https://www.cnblogs.com/llhthinker/p/5351201.html

 https://blog.csdn.net/dingchenxixi/article/details/51479003

 

As model complexity increases, bias decreases monotonically while variance increases, so the model's generalization error first decreases and then increases; a suitable model complexity therefore has to be found between "underfitting" and "overfitting". Model complexity is commonly measured with criteria such as the AIC (Akaike Information Criterion) and the BIC (Bayesian Information Criterion).

 

https://blog.csdn.net/baidu_35231778/article/details/52221400
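The AIC/BIC trade-off described above can be illustrated numerically. Below is a minimal sketch (my own, not from the linked notes): it assumes a Gaussian error model, under which -2 ln L reduces to n * ln(RSS / n) up to an additive constant, and compares polynomial fits of increasing degree. The data, function names, and chosen degrees are all illustrative.

```python
import numpy as np

def aic_bic(y, y_pred, k):
    """AIC = 2k - 2 ln L, BIC = k ln(n) - 2 ln L for a model with k parameters.

    Assuming Gaussian residuals, -2 ln L equals n * ln(RSS / n)
    up to an additive constant that cancels when comparing models.
    """
    n = len(y)
    rss = np.sum((y - y_pred) ** 2)
    neg2_logl = n * np.log(rss / n)
    return 2 * k + neg2_logl, k * np.log(n) + neg2_logl

# Noisy samples of a smooth function (illustrative data).
rng = np.random.default_rng(0)
x = np.linspace(0, 1, 50)
y = np.sin(2 * np.pi * x) + rng.normal(0, 0.2, size=x.size)

# Fit polynomials of growing complexity; k = degree + 1 coefficients.
for degree in (1, 3, 9, 15):
    coeffs = np.polyfit(x, y, degree)
    aic, bic = aic_bic(y, np.polyval(coeffs, x), k=degree + 1)
    print(f"degree={degree:2d}  AIC={aic:8.1f}  BIC={bic:8.1f}")
```

As the degree grows past what the data supports, the RSS term stops improving enough to offset the complexity penalty, so both criteria turn back up; the degree with the smallest criterion sits between underfitting and overfitting.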

 

2. 数学之美 (The Beauty of Mathematics)

http://mindhacks.cn/2008/09/21/the-magical-bayesian-method/

3. SVM

http://www.cnblogs.com/LeftNotEasy/archive/2011/05/02/basic-of-svm.html

 

http://www.10tiao.com/html/520/201711/2650725003/3.html

https://blog.csdn.net/v_july_v/article/details/7624837

https://blog.csdn.net/lch614730/article/details/17067027

https://www.jianshu.com/u/511ba5d71aef 

http://www.cnblogs.com/vipyoumay/p/7560061.html

4. k-means

http://blog.pluskid.org/?p=17

http://www.cnblogs.com/jerrylead/archive/2011/04/06/2006910.html

 

5. EM

https://zhuanlan.zhihu.com/p/24655368 

https://blog.csdn.net/zhihua_oba/article/details/73776553

https://www.cnblogs.com/fxjwind/p/3896113.html

https://www.cnblogs.com/xuesong/p/4179459.html

https://www.cnblogs.com/yymn/p/4769736.html

6. PCA

https://blog.csdn.net/mmc2015/article/details/42459753

https://blog.csdn.net/chlele0105/article/details/13004499

https://blog.csdn.net/zhangdadadawei/article/details/50929574

7. SVD

https://blog.csdn.net/zhongkejingwang/article/details/43083603

https://blog.csdn.net/xmu_jupiter

http://www.infoq.com/cn/articles/matrix-decomposition-of-recommend-system

https://blog.csdn.net/syani/article/details/52297093

https://blog.csdn.net/american199062/article/details/51344067

8. PRML

https://baijiahao.baidu.com/s?id=1585194960281334902&wfr=spider&for=pc

https://github.com/ctgk/PRML

 

----------------------------------------

 

https://www.cnblogs.com/baihuaxiu/p/6725223.html

 

  1. BAT Machine Learning Interview 1000 Questions series (daily practice)
  2. From Maximum Likelihood to the EM Algorithm: A Gentle Explanation 2018.3.7
  3. PR Curves and ROC Curves in Machine Learning 2018.3.23
  4. VotingClassifier: Model Aggregation via Voting 2018.3.25
  5. Machine Learning with Imbalanced Data 2018.3.25
  6. Machine Learning: Probability Calibration 2018.3.25
  7. Loss Functions in Machine Learning (focusing on hinge loss vs. softmax loss) 2018.3.25
  8. A Comparison of Commonly Used Classifiers in Machine Learning 2018.3.25
  9. Linear Discriminant Analysis 2018.3.26
  10. A Survey of 35 Commonly Used Machine Learning Algorithms (with mind map) 2018.3.26
  11. Categories of Machine Learning and a Comparison of Major Algorithms 2018.3.26
  12. The logistic Function and the softmax Function 2018.3.27
  13. Logistic Regression: Principles and Formula Derivation 2018.3.27
  14. Machine Learning Algorithms: Random Forest Implementation (regression and classification) 2018.4.7

http://ykksmile.top/posts/55073/

 

This veteran who successfully transitioned into machine learning wants to share his years of experience with you

https://blog.csdn.net/wemedia/details.html?id=38193

 

https://www.jianshu.com/u/12201cdd5d7a

 

Code

https://github.com/lzhe72/MachineLearning/

 

Copyright notice: This is an original article by javastart, licensed under CC 4.0 BY-SA. When reposting, please include a link to the original source along with this notice.
Original link: https://www.cnblogs.com/javastart/p/8742850.html