Over years of experimenting with neural network training, researchers have accumulated many useful tricks; applied sensibly, they can give your model a noticeably better fit.

Fully connected networks are strong fitters, but that very strength brings a problem of its own: overfitting.

Let's start with an example. This time we expand the original 4 XOR data points into a dataset of a few hundred samples with the same XOR pattern, then classify them with a fully connected network.

Example description: build a simulated XOR-style dataset, then build a simple multi-layer neural network to fit its features. First observe the underfitting that occurs, then increase the network's capacity to fix the underfitting, until overfitting appears.

    '''
    Generate random data
    '''
    np.random.seed(10)
    # Number of features
    num_features = 2
    # Number of samples
    num_samples = 320
    # Mean vector of length num_features, drawn from a standard normal distribution
    mean = np.random.randn(num_features)
    print('mean', mean)
    cov = np.eye(num_features)
    print('cov', cov)
    X, Y = generate(num_samples, mean, cov, [[3.0, 0.0], [3.0, 3.0], [0.0, 3.0]], num_classes=4)
    # Fold the four classes into two
    Y = Y % 2
    xr = []
    xb = []
    for (l, k) in zip(Y[:], X[:]):
        if l == 0.0:
            xr.append([k[0], k[1]])
        else:
            xb.append([k[0], k[1]])
    xr = np.array(xr)
    xb = np.array(xb)
    plt.scatter(xr[:, 0], xr[:, 1], c='r', marker='+')
    plt.scatter(xb[:, 0], xb[:, 1], c='b', marker='o')
The scatter plot shows the data in two classes: the bottom-left and top-right clusters form one class, and the top-left and bottom-right clusters form the other, which is the XOR pattern.

    '''
    Define variables
    '''
    # Learning rate
    learning_rate = 1e-4
    # Number of input-layer nodes
    n_input = 2
    # Number of hidden-layer nodes
    n_hidden = 2
    # Number of output nodes
    n_label = 1
    input_x = tf.placeholder(tf.float32, [None, n_input])
    input_y = tf.placeholder(tf.float32, [None, n_label])
    '''
    Define the learnable parameters
    h1: hidden layer
    h2: output layer
    '''
    weights = {
        'h1': tf.Variable(tf.truncated_normal(shape=[n_input, n_hidden], stddev=0.01)),  # stddev 0.01
        'h2': tf.Variable(tf.truncated_normal(shape=[n_hidden, n_label], stddev=0.01))
    }
    biases = {
        'h1': tf.Variable(tf.zeros([n_hidden])),
        'h2': tf.Variable(tf.zeros([n_label]))
    }
    '''
    Define the network model
    '''
    # Hidden layer
    layer_1 = tf.nn.relu(tf.add(tf.matmul(input_x, weights['h1']), biases['h1']))
    # Output layer
    y_pred = tf.nn.sigmoid(tf.add(tf.matmul(layer_1, weights['h2']), biases['h2']))
    # Mean-squared-error cost function
    loss = tf.reduce_mean(tf.square(y_pred - input_y))
    train = tf.train.AdamOptimizer(learning_rate).minimize(loss)
    '''
    Train
    '''
    training_epochs = 30000
    sess = tf.InteractiveSession()
    # Initialize variables
    sess.run(tf.global_variables_initializer())
    for epoch in range(training_epochs):
        _, lo = sess.run([train, loss], feed_dict={input_x: X, input_y: np.reshape(Y, [-1, 1])})
        if epoch % 1000 == 0:
            print('Epoch {0} loss {1}'.format(epoch, lo))
    '''
    Visualize
    '''
    nb_of_xs = 200
    xs1 = np.linspace(-1, 8, num=nb_of_xs)
    xs2 = np.linspace(-1, 8, num=nb_of_xs)
    # Build the grid
    xx, yy = np.meshgrid(xs1, xs2)
    # Initialize and fill the classification plane
    classfication_plane = np.zeros([nb_of_xs, nb_of_xs])
    for i in range(nb_of_xs):
        for j in range(nb_of_xs):
            # Predicted label for each grid point
            classfication_plane[i, j] = sess.run(y_pred, feed_dict={input_x: [[xx[i, j], yy[i, j]]]})
    # Color map for display
    cmap = ListedColormap([
        colorConverter.to_rgba('r', alpha=0.30),
        colorConverter.to_rgba('b', alpha=0.30),
    ])
    # Show the decision regions
    plt.contourf(xx, yy, classfication_plane, cmap=cmap)
    plt.show()


The model's progress stalls after about 20000 iterations: the loss settles around 0.16, accuracy is poor, and the visualization shows the data are not fully separated.

This is underfitting: the model has not fully fit the real structure of the data.

The underfitting is not because this kind of model cannot work, but because our training procedure cannot learn precise enough parameters for it; the weaker the model, the more it demands of training. By adding nodes or layers we can give the model more fitting capacity, which lowers the difficulty of training.

Change the number of hidden nodes to 200:

    # Number of hidden-layer nodes
    n_hidden = 200

The figure shows just how powerful a fully connected network is: with a single hidden layer of 200 neurons it partitions the data very finely, and the loss keeps shrinking, reaching 0.056 after 30000 iterations.

But is this model actually good? Let's draw a small batch of fresh data, run it through the model for validation, and visualize it in the same way.

    '''
    Test: the test loss is noticeably higher than the training loss
    because the model has overfitted.
    '''
    test_x, test_y = generate(12, mean, cov, [[3.0, 0.0], [3.0, 3.0], [0.0, 3.0]], num_classes=4)
    # Fold the four classes into two
    test_y = test_y % 2

    xr = []
    xb = []
    for (l, k) in zip(test_y[:], test_x[:]):
        if l == 0.0:
            xr.append([k[0], k[1]])
        else:
            xb.append([k[0], k[1]])
    xr = np.array(xr)
    xb = np.array(xb)
    plt.figure()
    plt.scatter(xr[:, 0], xr[:, 1], c='r', marker='+')
    plt.scatter(xb[:, 0], xb[:, 1], c='b', marker='o')
    lo = sess.run(loss, feed_dict={input_x: test_x, input_y: np.reshape(test_y, [-1, 1])})
    print('Test data loss {0}'.format(lo))
    nb_of_xs = 200
    xs1 = np.linspace(-1, 8, num=nb_of_xs)
    xs2 = np.linspace(-1, 8, num=nb_of_xs)
    # Build the grid
    xx, yy = np.meshgrid(xs1, xs2)
    # Initialize and fill the classification plane
    classfication_plane = np.zeros([nb_of_xs, nb_of_xs])
    for i in range(nb_of_xs):
        for j in range(nb_of_xs):
            # Predicted label for each grid point
            classfication_plane[i, j] = sess.run(y_pred, feed_dict={input_x: [[xx[i, j], yy[i, j]]]})
    # Color map for display
    cmap = ListedColormap([
        colorConverter.to_rgba('r', alpha=0.30),
        colorConverter.to_rgba('b', alpha=0.30),
    ])
    # Show the decision regions
    plt.contourf(xx, yy, classfication_plane, cmap=cmap)
    plt.show()

From this run, the test loss climbs to 0.21, nowhere near the 0.056 achieved during training. The model is unchanged, yet its decision regions now capture only a fraction of the samples. This is overfitting. Like underfitting, it is something we want to avoid during training: what we want is a genuine fit, one that shows the same good behavior at test time as during training.

There are many ways to curb overfitting. Common ones include early stopping, dataset expansion, regularization, and dropout; the sections below apply these methods to optimize this example. (Early stopping is the one method the rest of this post does not demonstrate, so a minimal sketch of it follows here.)
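A minimal early-stopping sketch, assuming the session and tensors defined above plus a held-out validation split val_x/val_y (these validation names are my own, not from the original code): stop once the validation loss has failed to improve for a fixed number of checks.

    # Hypothetical early-stopping loop; assumes sess, train, loss, input_x, input_y,
    # train_x/train_y, and a validation split val_x/val_y already exist.
    best_val = float('inf')
    patience, bad_checks = 5, 0
    for epoch in range(training_epochs):
        sess.run(train, feed_dict={input_x: train_x, input_y: np.reshape(train_y, [-1, 1])})
        if epoch % 1000 == 0:
            val_lo = sess.run(loss, feed_dict={input_x: val_x, input_y: np.reshape(val_y, [-1, 1])})
            if val_lo < best_val:
                best_val, bad_checks = val_lo, 0  # improved: reset the patience counter
            else:
                bad_checks += 1  # no improvement at this checkpoint
                if bad_checks >= patience:
                    print('Early stopping at epoch', epoch)
                    break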

TensorFlow ships an L2 regularization function that can be used directly:

    tf.nn.l2_loss(t, name=None)

Its prototype is:

    def l2_loss(t, name=None):
        r"""L2 Loss.

        Computes half the L2 norm of a tensor without the `sqrt`:

            output = sum(t ** 2) / 2

        Args:
            t: A `Tensor`. Must be one of the following types: `half`, `float32`, `float64`.
                Typically 2-D, but may have any dimensions.
            name: A name for the operation (optional).

        Returns:
            A `Tensor`. Has the same type as `t`. 0-D.
        """
        result = _op_def_lib.apply_op("L2Loss", t=t, name=name)
        return result

TensorFlow does not provide an L1 regularization function, but one is easy to compose:

    tf.reduce_sum(tf.abs(w))
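As a quick sanity check (my own addition, not from the original post), the snippet below evaluates both regularizers on a small constant tensor and matches them against the formulas above:

    import tensorflow as tf

    w = tf.constant([[1.0, -2.0], [3.0, 0.0]])
    # L2: sum(w**2) / 2 = (1 + 4 + 9 + 0) / 2 = 7.0
    l2 = tf.nn.l2_loss(w)
    # L1: sum(|w|) = 1 + 2 + 3 + 0 = 6.0
    l1 = tf.reduce_sum(tf.abs(w))
    with tf.Session() as sess:
        print(sess.run([l2, l1]))  # [7.0, 6.0]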

We now add L2 regularization to the code, with the penalty coefficient lamda = 1.6, modifying the cost function as follows:

    loss = tf.reduce_mean(tf.square(y_pred - input_y)) + lamda * tf.nn.l2_loss(weights['h1']) / num_samples + lamda * tf.nn.l2_loss(weights['h2']) / num_samples


The training loss rises from 0.056 to 0.106, while the test loss falls only from 0.21 to about 0.197, so the effect is not dramatic.

Next, let's try enlarging the dataset to relieve the overfitting. Instead of training on a single fixed random sample set, we generate 1000 fresh samples on each pass through the training loop. Part of the code:

    for epoch in range(training_epochs):
        train_x, train_y = generate(num_samples, mean, cov, [[3.0, 0.0], [3.0, 3.0], [0.0, 3.0]], num_classes=4)
        # Fold the four classes into two
        train_y = train_y % 2
        _, lo = sess.run([train, loss], feed_dict={input_x: train_x, input_y: np.reshape(train_y, [-1, 1])})
        if epoch % 1000 == 0:
            print('Epoch {0} loss {1}'.format(epoch, lo))


This time the test loss drops to 0.04, even lower than the training loss; the model generalizes much better.

The prototype of TensorFlow's dropout function is:

    def dropout(x, keep_prob, noise_shape=None, seed=None, name=None)

Its parameters mean the following:

  • x: the input (the tensor of nodes to apply dropout to).
  • keep_prob: the keep probability. 1 means every node takes part in learning; 0.8 means 20% of the nodes are dropped and only 80% participate.
  • noise_shape: specifies which dimensions of x dropout is applied to.
  • seed: the seed for the random choice of which nodes to drop.

Dropout changes the structure of the network and is strictly a training-time technique. At test time, keep_prob should therefore generally be set to 1, meaning nothing is dropped; otherwise the model's outputs would be distorted.
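It helps to know why keep_prob = 1 at test time is sufficient: tf.nn.dropout implements "inverted dropout", scaling the surviving units by 1/keep_prob during training so that the expected activation is unchanged, which is why no extra rescaling is needed at test time. A minimal sketch (my own illustration):

    import tensorflow as tf

    x = tf.ones([1, 4])
    keep_prob = tf.placeholder(tf.float32)
    y = tf.nn.dropout(x, keep_prob)
    with tf.Session() as sess:
        # Training mode: each unit survives with probability 0.5 and is scaled to 1/0.5 = 2.0
        print(sess.run(y, feed_dict={keep_prob: 0.5}))  # e.g. [[2. 0. 2. 2.]]
        # Test mode: keep_prob = 1 passes the input through unchanged
        print(sess.run(y, feed_dict={keep_prob: 1.0}))  # [[1. 1. 1. 1.]]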

We add dropout to the program and set keep_prob to 0.5 during training:

    '''
    Define the network model
    '''
    # Hidden layer
    layer_1 = tf.nn.relu(tf.add(tf.matmul(input_x, weights['h1']), biases['h1']))
    keep_prob = tf.placeholder(dtype=tf.float32)
    layer_1_drop = tf.nn.dropout(layer_1, keep_prob)
    # Output layer and cost function
    y_pred = tf.nn.sigmoid(tf.add(tf.matmul(layer_1_drop, weights['h2']), biases['h2']))
    loss = tf.reduce_mean(tf.square(y_pred - input_y))

From the output, dropout does not bring much improvement here either; its effect is about the same as L2 regularization's.

The results also show the cost value bouncing back and forth, caused by jitter late in training. This suggests the learning rate is a bit too large, so we can add a decaying learning rate.

Where the optimizer is defined, switch to a decaying learning_rate: with 30000 total steps, the rate decays by a factor of 0.9 every 1000 steps. Part of the code:

    '''
    Define the network model
    '''
    # Hidden layer
    layer_1 = tf.nn.relu(tf.add(tf.matmul(input_x, weights['h1']), biases['h1']))
    keep_prob = tf.placeholder(dtype=tf.float32)
    layer_1_drop = tf.nn.dropout(layer_1, keep_prob)
    # Output layer and cost function
    y_pred = tf.nn.sigmoid(tf.add(tf.matmul(layer_1_drop, weights['h2']), biases['h2']))
    loss = tf.reduce_mean(tf.square(y_pred - input_y))
    global_step = tf.Variable(0, trainable=False)
    decaylearning_rate = tf.train.exponential_decay(learning_rate, global_step, 1000, 0.9)
    train = tf.train.AdamOptimizer(decaylearning_rate).minimize(loss, global_step=global_step)

    '''
    Train
    '''
    training_epochs = 30000
    sess = tf.InteractiveSession()
    # Initialize variables
    sess.run(tf.global_variables_initializer())
    for epoch in range(training_epochs):
        # Each run of `train` increments global_step by 1
        rate, _, lo = sess.run([decaylearning_rate, train, loss], feed_dict={input_x: train_x, input_y: np.reshape(train_y, [-1, 1]), keep_prob: 0.5})
        if epoch % 1000 == 0:
            print('Epoch {0} learning_rate {1} loss {2} '.format(epoch, rate, lo))
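For reference, with the default staircase=False, tf.train.exponential_decay computes learning_rate * decay_rate ** (global_step / decay_steps), so under the settings above the schedule looks like this (a plain-Python illustration of the formula, not output from the original program):

    # Reproduce the exponential-decay schedule in plain Python
    learning_rate = 1e-4
    decay_steps, decay_rate = 1000, 0.9
    for step in [0, 1000, 10000, 30000]:
        lr = learning_rate * decay_rate ** (step / decay_steps)
        print(step, lr)
    # 0 -> 1e-04, 1000 -> 9e-05, 10000 -> ~3.49e-05, 30000 -> ~4.24e-06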

The learning rate does decay as intended, but the improvement is still modest and the cost value keeps oscillating. We can keep adjusting parameters to do better; it is a job that takes patience.

Complete code:

    # -*- coding: utf-8 -*-
    """
    Created on Thu Apr 26 15:02:16 2018
    @author: zy
    """

    '''
    Use an overfitting example to study optimization tricks for training
    fully connected networks, e.g. regularization and dropout.
    '''

    import tensorflow as tf
    import numpy as np
    from sklearn.utils import shuffle
    import matplotlib.pyplot as plt
    from matplotlib.colors import colorConverter, ListedColormap

    '''
    Generate the dataset
    '''
    def get_one_hot(labels, num_classes):
        '''
        One-hot encode the labels.
        args:
            labels: input class labels
            num_classes: number of classes
        '''
        m = np.zeros([labels.shape[0], num_classes])
        for i in range(labels.shape[0]):
            m[i][labels[i]] = 1
        return m
    def generate(sample_size, mean, cov, diff, num_classes=2, one_hot=False):
        '''
        No real clinical data is available, so simulate samples instead:
        draw a fixed number of samples with the given mean and covariance.
        args:
            sample_size: number of samples
            mean: 1-D ndarray or list of length M, the mean of each feature
            cov: N x N ndarray or list, the (symmetric) covariance matrix
            diff: list of length num_classes-1; element i is the offset of class i+1's
                  mean from class 0's mean, [feature-1 offset, feature-2 offset, ...].
                  If it is shorter than that, the last element is repeated.
            num_classes: number of classes
            one_hot: whether to one-hot encode the labels
        '''
        # Samples per class, e.g. 1000 samples over 2 classes means 500 per class
        sample_per_class = int(sample_size / num_classes)
        '''
        np.random.multivariate_normal:
        mean: 1-D array_like of length N; mean of the N-dimensional distribution
        cov: 2-D array_like of shape (N, N); covariance matrix of the distribution.
             It must be symmetric and positive-semidefinite for proper sampling.
        size: shape. Given a shape of, for example, (m, n, k), m*n*k samples are
              generated and packed in an m-by-n-by-k arrangement. Because each sample
              is N-dimensional, the output shape is (m, n, k, N). If no shape is
              specified, a single (N-D) sample is returned.
        '''
        # Class 0: sample_per_class samples with mean `mean` and covariance `cov`
        X0 = np.random.multivariate_normal(mean, cov, sample_per_class)
        Y0 = np.zeros(sample_per_class, dtype=np.int32)
        # Pad diff if it is shorter than num_classes - 1
        if len(diff) != num_classes - 1:
            tmp = np.zeros(num_classes - 1)
            tmp[0:len(diff)] = diff
            tmp[len(diff):] = diff[-1]
        else:
            tmp = diff
        # enumerate turns the list into (index, element) pairs
        for ci, d in enumerate(tmp):
            # Class ci+1: sample_per_class samples with mean `mean + d` and covariance `cov`
            X1 = np.random.multivariate_normal(mean + d, cov, sample_per_class)
            Y1 = (ci + 1) * np.ones(sample_per_class, dtype=np.int32)
            # Append to X0 / Y0
            X0 = np.concatenate((X0, X1))
            Y0 = np.concatenate((Y0, Y1))
        if one_hot:
            Y0 = get_one_hot(Y0, num_classes)
        # Shuffle the samples
        X, Y = shuffle(X0, Y0)
        return X, Y
    def example_overfit():
        '''
        Demonstrate overfitting.
        '''
        '''
        Generate random data
        '''
        np.random.seed(10)
        # Number of features
        num_features = 2
        # Number of samples
        num_samples = 320
        # Mean vector of length num_features, drawn from a standard normal distribution
        mean = np.random.randn(num_features)
        print('mean', mean)
        cov = np.eye(num_features)
        print('cov', cov)
        train_x, train_y = generate(num_samples, mean, cov, [[3.0, 0.0], [3.0, 3.0], [0.0, 3.0]], num_classes=4)
        # Fold the four classes into two
        train_y = train_y % 2
        xr = []
        xb = []
        for (l, k) in zip(train_y[:], train_x[:]):
            if l == 0.0:
                xr.append([k[0], k[1]])
            else:
                xb.append([k[0], k[1]])
        xr = np.array(xr)
        xb = np.array(xb)
        plt.scatter(xr[:, 0], xr[:, 1], c='r', marker='+')
        plt.scatter(xb[:, 0], xb[:, 1], c='b', marker='o')
        '''
        Define variables
        '''
        # Learning rate
        learning_rate = 1e-4
        # Number of input-layer nodes
        n_input = 2
        # Number of hidden-layer nodes
        n_hidden = 200  # set this to 2 and the model underfits
        # Number of output nodes
        n_label = 1
        input_x = tf.placeholder(tf.float32, [None, n_input])
        input_y = tf.placeholder(tf.float32, [None, n_label])
        '''
        Define the learnable parameters
        h1: hidden layer
        h2: output layer
        '''
        weights = {
            'h1': tf.Variable(tf.truncated_normal(shape=[n_input, n_hidden], stddev=0.01)),  # stddev 0.01
            'h2': tf.Variable(tf.truncated_normal(shape=[n_hidden, n_label], stddev=0.01))
        }
        biases = {
            'h1': tf.Variable(tf.zeros([n_hidden])),
            'h2': tf.Variable(tf.zeros([n_label]))
        }
        '''
        Define the network model
        '''
        # Hidden layer
        layer_1 = tf.nn.relu(tf.add(tf.matmul(input_x, weights['h1']), biases['h1']))
        # Output layer and cost function
        y_pred = tf.nn.sigmoid(tf.add(tf.matmul(layer_1, weights['h2']), biases['h2']))
        loss = tf.reduce_mean(tf.square(y_pred - input_y))
        train = tf.train.AdamOptimizer(learning_rate).minimize(loss)
        '''
        Train
        '''
        training_epochs = 30000
        sess = tf.InteractiveSession()
        # Initialize variables
        sess.run(tf.global_variables_initializer())
        for epoch in range(training_epochs):
            _, lo = sess.run([train, loss], feed_dict={input_x: train_x, input_y: np.reshape(train_y, [-1, 1])})
            if epoch % 1000 == 0:
                print('Epoch {0} loss {1}'.format(epoch, lo))
        '''
        Visualize
        '''
        nb_of_xs = 200
        xs1 = np.linspace(-1, 8, num=nb_of_xs)
        xs2 = np.linspace(-1, 8, num=nb_of_xs)
        # Build the grid
        xx, yy = np.meshgrid(xs1, xs2)
        # Initialize and fill the classification plane
        classfication_plane = np.zeros([nb_of_xs, nb_of_xs])
        for i in range(nb_of_xs):
            for j in range(nb_of_xs):
                # Predicted label for each grid point
                classfication_plane[i, j] = sess.run(y_pred, feed_dict={input_x: [[xx[i, j], yy[i, j]]]})
        # Color map for display
        cmap = ListedColormap([
            colorConverter.to_rgba('r', alpha=0.30),
            colorConverter.to_rgba('b', alpha=0.30),
        ])
        # Show the decision regions
        plt.contourf(xx, yy, classfication_plane, cmap=cmap)
        plt.show()
        '''
        Test: the test loss is noticeably higher than the training loss
        because the model has overfitted.
        '''
        test_x, test_y = generate(12, mean, cov, [[3.0, 0.0], [3.0, 3.0], [0.0, 3.0]], num_classes=4)
        # Fold the four classes into two
        test_y = test_y % 2
        xr = []
        xb = []
        for (l, k) in zip(test_y[:], test_x[:]):
            if l == 0.0:
                xr.append([k[0], k[1]])
            else:
                xb.append([k[0], k[1]])
        xr = np.array(xr)
        xb = np.array(xb)
        plt.figure()
        plt.scatter(xr[:, 0], xr[:, 1], c='r', marker='+')
        plt.scatter(xb[:, 0], xb[:, 1], c='b', marker='o')
        lo = sess.run(loss, feed_dict={input_x: test_x, input_y: np.reshape(test_y, [-1, 1])})
        print('Test data loss {0}'.format(lo))
        nb_of_xs = 200
        xs1 = np.linspace(-1, 8, num=nb_of_xs)
        xs2 = np.linspace(-1, 8, num=nb_of_xs)
        # Build the grid
        xx, yy = np.meshgrid(xs1, xs2)
        # Initialize and fill the classification plane
        classfication_plane = np.zeros([nb_of_xs, nb_of_xs])
        for i in range(nb_of_xs):
            for j in range(nb_of_xs):
                # Predicted label for each grid point
                classfication_plane[i, j] = sess.run(y_pred, feed_dict={input_x: [[xx[i, j], yy[i, j]]]})
        # Color map for display
        cmap = ListedColormap([
            colorConverter.to_rgba('r', alpha=0.30),
            colorConverter.to_rgba('b', alpha=0.30),
        ])
        # Show the decision regions
        plt.contourf(xx, yy, classfication_plane, cmap=cmap)
        plt.show()
    def example_l2_norm():
        '''
        Mitigate overfitting with the L2 norm.
        '''
        '''
        Generate random data
        '''
        np.random.seed(10)
        # Number of features
        num_features = 2
        # Number of samples
        num_samples = 320
        # Mean vector of length num_features, drawn from a standard normal distribution
        mean = np.random.randn(num_features)
        print('mean', mean)
        cov = np.eye(num_features)
        print('cov', cov)
        train_x, train_y = generate(num_samples, mean, cov, [[3.0, 0.0], [3.0, 3.0], [0.0, 3.0]], num_classes=4)
        # Fold the four classes into two
        train_y = train_y % 2
        xr = []
        xb = []
        for (l, k) in zip(train_y[:], train_x[:]):
            if l == 0.0:
                xr.append([k[0], k[1]])
            else:
                xb.append([k[0], k[1]])
        xr = np.array(xr)
        xb = np.array(xb)
        plt.scatter(xr[:, 0], xr[:, 1], c='r', marker='+')
        plt.scatter(xb[:, 0], xb[:, 1], c='b', marker='o')
        '''
        Define variables
        '''
        # Learning rate
        learning_rate = 1e-4
        # Number of input-layer nodes
        n_input = 2
        # Number of hidden-layer nodes
        n_hidden = 200  # set this to 2 and the model underfits
        # Number of output nodes
        n_label = 1
        # Regularization coefficient
        lamda = 1.6
        input_x = tf.placeholder(tf.float32, [None, n_input])
        input_y = tf.placeholder(tf.float32, [None, n_label])
        '''
        Define the learnable parameters
        h1: hidden layer
        h2: output layer
        '''
        weights = {
            'h1': tf.Variable(tf.truncated_normal(shape=[n_input, n_hidden], stddev=0.01)),  # stddev 0.01
            'h2': tf.Variable(tf.truncated_normal(shape=[n_hidden, n_label], stddev=0.01))
        }
        biases = {
            'h1': tf.Variable(tf.zeros([n_hidden])),
            'h2': tf.Variable(tf.zeros([n_label]))
        }
        '''
        Define the network model
        '''
        # Hidden layer
        layer_1 = tf.nn.relu(tf.add(tf.matmul(input_x, weights['h1']), biases['h1']))
        # Output layer and L2-regularized cost function
        y_pred = tf.nn.sigmoid(tf.add(tf.matmul(layer_1, weights['h2']), biases['h2']))
        loss = tf.reduce_mean(tf.square(y_pred - input_y)) + lamda * tf.nn.l2_loss(weights['h1']) / num_samples + lamda * tf.nn.l2_loss(weights['h2']) / num_samples
        train = tf.train.AdamOptimizer(learning_rate).minimize(loss)
        '''
        Train
        '''
        training_epochs = 30000
        sess = tf.InteractiveSession()
        # Initialize variables
        sess.run(tf.global_variables_initializer())
        for epoch in range(training_epochs):
            _, lo = sess.run([train, loss], feed_dict={input_x: train_x, input_y: np.reshape(train_y, [-1, 1])})
            if epoch % 1000 == 0:
                print('Epoch {0} loss {1}'.format(epoch, lo))
        '''
        Visualize
        '''
        nb_of_xs = 200
        xs1 = np.linspace(-1, 8, num=nb_of_xs)
        xs2 = np.linspace(-1, 8, num=nb_of_xs)
        # Build the grid
        xx, yy = np.meshgrid(xs1, xs2)
        # Initialize and fill the classification plane
        classfication_plane = np.zeros([nb_of_xs, nb_of_xs])
        for i in range(nb_of_xs):
            for j in range(nb_of_xs):
                # Predicted label for each grid point
                classfication_plane[i, j] = sess.run(y_pred, feed_dict={input_x: [[xx[i, j], yy[i, j]]]})
        # Color map for display
        cmap = ListedColormap([
            colorConverter.to_rgba('r', alpha=0.30),
            colorConverter.to_rgba('b', alpha=0.30),
        ])
        # Show the decision regions
        plt.contourf(xx, yy, classfication_plane, cmap=cmap)
        plt.show()
        '''
        Test: the test loss is noticeably higher than the training loss
        because the model has overfitted.
        '''
        test_x, test_y = generate(12, mean, cov, [[3.0, 0.0], [3.0, 3.0], [0.0, 3.0]], num_classes=4)
        # Fold the four classes into two
        test_y = test_y % 2
        xr = []
        xb = []
        for (l, k) in zip(test_y[:], test_x[:]):
            if l == 0.0:
                xr.append([k[0], k[1]])
            else:
                xb.append([k[0], k[1]])
        xr = np.array(xr)
        xb = np.array(xb)
        plt.figure()
        plt.scatter(xr[:, 0], xr[:, 1], c='r', marker='+')
        plt.scatter(xb[:, 0], xb[:, 1], c='b', marker='o')
        lo = sess.run(loss, feed_dict={input_x: test_x, input_y: np.reshape(test_y, [-1, 1])})
        print('Test data loss {0}'.format(lo))
        nb_of_xs = 200
        xs1 = np.linspace(-1, 8, num=nb_of_xs)
        xs2 = np.linspace(-1, 8, num=nb_of_xs)
        # Build the grid
        xx, yy = np.meshgrid(xs1, xs2)
        # Initialize and fill the classification plane
        classfication_plane = np.zeros([nb_of_xs, nb_of_xs])
        for i in range(nb_of_xs):
            for j in range(nb_of_xs):
                # Predicted label for each grid point
                classfication_plane[i, j] = sess.run(y_pred, feed_dict={input_x: [[xx[i, j], yy[i, j]]]})
        # Color map for display
        cmap = ListedColormap([
            colorConverter.to_rgba('r', alpha=0.30),
            colorConverter.to_rgba('b', alpha=0.30),
        ])
        # Show the decision regions
        plt.contourf(xx, yy, classfication_plane, cmap=cmap)
        plt.show()
    def example_add_trainset():
        '''
        Mitigate overfitting by enlarging the training set.
        '''
        '''
        Generate random data
        '''
        np.random.seed(10)
        # Number of features
        num_features = 2
        # Number of samples generated per epoch
        num_samples = 1000
        # Mean vector of length num_features, drawn from a standard normal distribution
        mean = np.random.randn(num_features)
        print('mean', mean)
        cov = np.eye(num_features)
        print('cov', cov)
        '''
        Define variables
        '''
        # Learning rate
        learning_rate = 1e-4
        # Number of input-layer nodes
        n_input = 2
        # Number of hidden-layer nodes
        n_hidden = 200  # set this to 2 and the model underfits
        # Number of output nodes
        n_label = 1
        input_x = tf.placeholder(tf.float32, [None, n_input])
        input_y = tf.placeholder(tf.float32, [None, n_label])
        '''
        Define the learnable parameters
        h1: hidden layer
        h2: output layer
        '''
        weights = {
            'h1': tf.Variable(tf.truncated_normal(shape=[n_input, n_hidden], stddev=0.01)),  # stddev 0.01
            'h2': tf.Variable(tf.truncated_normal(shape=[n_hidden, n_label], stddev=0.01))
        }
        biases = {
            'h1': tf.Variable(tf.zeros([n_hidden])),
            'h2': tf.Variable(tf.zeros([n_label]))
        }
        '''
        Define the network model
        '''
        # Hidden layer
        layer_1 = tf.nn.relu(tf.add(tf.matmul(input_x, weights['h1']), biases['h1']))
        # Output layer and cost function
        y_pred = tf.nn.sigmoid(tf.add(tf.matmul(layer_1, weights['h2']), biases['h2']))
        loss = tf.reduce_mean(tf.square(y_pred - input_y))
        train = tf.train.AdamOptimizer(learning_rate).minimize(loss)
        '''
        Train
        '''
        training_epochs = 30000
        sess = tf.InteractiveSession()
        # Initialize variables
        sess.run(tf.global_variables_initializer())
        for epoch in range(training_epochs):
            # Draw a fresh set of samples on every pass
            train_x, train_y = generate(num_samples, mean, cov, [[3.0, 0.0], [3.0, 3.0], [0.0, 3.0]], num_classes=4)
            # Fold the four classes into two
            train_y = train_y % 2
            _, lo = sess.run([train, loss], feed_dict={input_x: train_x, input_y: np.reshape(train_y, [-1, 1])})
            if epoch % 1000 == 0:
                print('Epoch {0} loss {1}'.format(epoch, lo))
        '''
        Test: compare the test loss with the training loss.
        '''
        test_x, test_y = generate(12, mean, cov, [[3.0, 0.0], [3.0, 3.0], [0.0, 3.0]], num_classes=4)
        # Fold the four classes into two
        test_y = test_y % 2
        xr = []
        xb = []
        for (l, k) in zip(test_y[:], test_x[:]):
            if l == 0.0:
                xr.append([k[0], k[1]])
            else:
                xb.append([k[0], k[1]])
        xr = np.array(xr)
        xb = np.array(xb)
        plt.figure()
        plt.scatter(xr[:, 0], xr[:, 1], c='r', marker='+')
        plt.scatter(xb[:, 0], xb[:, 1], c='b', marker='o')
        lo = sess.run(loss, feed_dict={input_x: test_x, input_y: np.reshape(test_y, [-1, 1])})
        print('Test data loss {0}'.format(lo))
        nb_of_xs = 200
        xs1 = np.linspace(-1, 8, num=nb_of_xs)
        xs2 = np.linspace(-1, 8, num=nb_of_xs)
        # Build the grid
        xx, yy = np.meshgrid(xs1, xs2)
        # Initialize and fill the classification plane
        classfication_plane = np.zeros([nb_of_xs, nb_of_xs])
        for i in range(nb_of_xs):
            for j in range(nb_of_xs):
                # Predicted label for each grid point
                classfication_plane[i, j] = sess.run(y_pred, feed_dict={input_x: [[xx[i, j], yy[i, j]]]})
        # Color map for display
        cmap = ListedColormap([
            colorConverter.to_rgba('r', alpha=0.30),
            colorConverter.to_rgba('b', alpha=0.30),
        ])
        # Show the decision regions
        plt.contourf(xx, yy, classfication_plane, cmap=cmap)
        plt.show()
    def example_dropout():
        '''
        Mitigate overfitting with dropout.
        '''
        '''
        Generate random data
        '''
        np.random.seed(10)
        # Number of features
        num_features = 2
        # Number of samples
        num_samples = 320
        # Mean vector of length num_features, drawn from a standard normal distribution
        mean = np.random.randn(num_features)
        print('mean', mean)
        cov = np.eye(num_features)
        print('cov', cov)
        train_x, train_y = generate(num_samples, mean, cov, [[3.0, 0.0], [3.0, 3.0], [0.0, 3.0]], num_classes=4)
        # Fold the four classes into two
        train_y = train_y % 2
        xr = []
        xb = []
        for (l, k) in zip(train_y[:], train_x[:]):
            if l == 0.0:
                xr.append([k[0], k[1]])
            else:
                xb.append([k[0], k[1]])
        xr = np.array(xr)
        xb = np.array(xb)
        plt.scatter(xr[:, 0], xr[:, 1], c='r', marker='+')
        plt.scatter(xb[:, 0], xb[:, 1], c='b', marker='o')
        '''
        Define variables
        '''
        # Learning rate
        learning_rate = 1e-4
        # Number of input-layer nodes
        n_input = 2
        # Number of hidden-layer nodes
        n_hidden = 200  # set this to 2 and the model underfits
        # Number of output nodes
        n_label = 1
        input_x = tf.placeholder(tf.float32, [None, n_input])
        input_y = tf.placeholder(tf.float32, [None, n_label])
        '''
        Define the learnable parameters
        h1: hidden layer
        h2: output layer
        '''
        weights = {
            'h1': tf.Variable(tf.truncated_normal(shape=[n_input, n_hidden], stddev=0.01)),  # stddev 0.01
            'h2': tf.Variable(tf.truncated_normal(shape=[n_hidden, n_label], stddev=0.01))
        }
        biases = {
            'h1': tf.Variable(tf.zeros([n_hidden])),
            'h2': tf.Variable(tf.zeros([n_label]))
        }
        '''
        Define the network model
        '''
        # Hidden layer, with dropout applied to it
        layer_1 = tf.nn.relu(tf.add(tf.matmul(input_x, weights['h1']), biases['h1']))
        keep_prob = tf.placeholder(dtype=tf.float32)
        layer_1_drop = tf.nn.dropout(layer_1, keep_prob)
        # Output layer and cost function
        y_pred = tf.nn.sigmoid(tf.add(tf.matmul(layer_1_drop, weights['h2']), biases['h2']))
        loss = tf.reduce_mean(tf.square(y_pred - input_y))
        train = tf.train.AdamOptimizer(learning_rate).minimize(loss)
        '''
        Train
        '''
        training_epochs = 30000
        sess = tf.InteractiveSession()
        # Initialize variables
        sess.run(tf.global_variables_initializer())
        for epoch in range(training_epochs):
            _, lo = sess.run([train, loss], feed_dict={input_x: train_x, input_y: np.reshape(train_y, [-1, 1]), keep_prob: 0.5})
            if epoch % 1000 == 0:
                print('Epoch {0} loss {1}'.format(epoch, lo))
        '''
        Visualize
        '''
        nb_of_xs = 200
        xs1 = np.linspace(-1, 8, num=nb_of_xs)
        xs2 = np.linspace(-1, 8, num=nb_of_xs)
        # Build the grid
        xx, yy = np.meshgrid(xs1, xs2)
        # Initialize and fill the classification plane
        classfication_plane = np.zeros([nb_of_xs, nb_of_xs])
        for i in range(nb_of_xs):
            for j in range(nb_of_xs):
                # Predicted label for each grid point; keep_prob = 1 at inference time
                classfication_plane[i, j] = sess.run(y_pred, feed_dict={input_x: [[xx[i, j], yy[i, j]]], keep_prob: 1.0})
        # Color map for display
        cmap = ListedColormap([
            colorConverter.to_rgba('r', alpha=0.30),
            colorConverter.to_rgba('b', alpha=0.30),
        ])
        # Show the decision regions
        plt.contourf(xx, yy, classfication_plane, cmap=cmap)
        plt.show()
        '''
        Test: compare the test loss with the training loss.
        '''
        test_x, test_y = generate(12, mean, cov, [[3.0, 0.0], [3.0, 3.0], [0.0, 3.0]], num_classes=4)
        # Fold the four classes into two
        test_y = test_y % 2
        xr = []
        xb = []
        for (l, k) in zip(test_y[:], test_x[:]):
            if l == 0.0:
                xr.append([k[0], k[1]])
            else:
                xb.append([k[0], k[1]])
        xr = np.array(xr)
        xb = np.array(xb)
        plt.figure()
        plt.scatter(xr[:, 0], xr[:, 1], c='r', marker='+')
        plt.scatter(xb[:, 0], xb[:, 1], c='b', marker='o')
        lo = sess.run(loss, feed_dict={input_x: test_x, input_y: np.reshape(test_y, [-1, 1]), keep_prob: 1.0})
        print('Test data loss {0}'.format(lo))
        nb_of_xs = 200
        xs1 = np.linspace(-1, 8, num=nb_of_xs)
        xs2 = np.linspace(-1, 8, num=nb_of_xs)
        # Build the grid
        xx, yy = np.meshgrid(xs1, xs2)
        # Initialize and fill the classification plane
        classfication_plane = np.zeros([nb_of_xs, nb_of_xs])
        for i in range(nb_of_xs):
            for j in range(nb_of_xs):
                # Predicted label for each grid point; keep_prob = 1 at inference time
                classfication_plane[i, j] = sess.run(y_pred, feed_dict={input_x: [[xx[i, j], yy[i, j]]], keep_prob: 1.0})
        # Color map for display
        cmap = ListedColormap([
            colorConverter.to_rgba('r', alpha=0.30),
            colorConverter.to_rgba('b', alpha=0.30),
        ])
        # Show the decision regions
        plt.contourf(xx, yy, classfication_plane, cmap=cmap)
        plt.show()
    def example_dropout_learningrate_decay():
        '''
        Mitigate overfitting with dropout, using a decaying learning rate
        to speed up learning.
        '''
        '''
        Generate random data
        '''
        np.random.seed(10)
        # Number of features
        num_features = 2
        # Number of samples
        num_samples = 320
        # Mean vector of length num_features, drawn from a standard normal distribution
        mean = np.random.randn(num_features)
        print('mean', mean)
        cov = np.eye(num_features)
        print('cov', cov)
        train_x, train_y = generate(num_samples, mean, cov, [[3.0, 0.0], [3.0, 3.0], [0.0, 3.0]], num_classes=4)
        # Fold the four classes into two
        train_y = train_y % 2
        xr = []
        xb = []
        for (l, k) in zip(train_y[:], train_x[:]):
            if l == 0.0:
                xr.append([k[0], k[1]])
            else:
                xb.append([k[0], k[1]])
        xr = np.array(xr)
        xb = np.array(xb)
        plt.scatter(xr[:, 0], xr[:, 1], c='r', marker='+')
        plt.scatter(xb[:, 0], xb[:, 1], c='b', marker='o')
        '''
        Define variables
        '''
        # Initial learning rate
        learning_rate = 1e-4
        # Number of input-layer nodes
        n_input = 2
        # Number of hidden-layer nodes
        n_hidden = 200  # set this to 2 and the model underfits
        # Number of output nodes
        n_label = 1
        input_x = tf.placeholder(tf.float32, [None, n_input])
        input_y = tf.placeholder(tf.float32, [None, n_label])
        '''
        Define the learnable parameters
        h1: hidden layer
        h2: output layer
        '''
        weights = {
            'h1': tf.Variable(tf.truncated_normal(shape=[n_input, n_hidden], stddev=0.01)),  # stddev 0.01
            'h2': tf.Variable(tf.truncated_normal(shape=[n_hidden, n_label], stddev=0.01))
        }
        biases = {
            'h1': tf.Variable(tf.zeros([n_hidden])),
            'h2': tf.Variable(tf.zeros([n_label]))
        }
        '''
        Define the network model
        '''
        # Hidden layer, with dropout applied to it
        layer_1 = tf.nn.relu(tf.add(tf.matmul(input_x, weights['h1']), biases['h1']))
        keep_prob = tf.placeholder(dtype=tf.float32)
        layer_1_drop = tf.nn.dropout(layer_1, keep_prob)
        # Output layer and cost function
        y_pred = tf.nn.sigmoid(tf.add(tf.matmul(layer_1_drop, weights['h2']), biases['h2']))
        loss = tf.reduce_mean(tf.square(y_pred - input_y))
        # Learning rate decays by 0.9 every 1000 steps
        global_step = tf.Variable(0, trainable=False)
        decaylearning_rate = tf.train.exponential_decay(learning_rate, global_step, 1000, 0.9)
        train = tf.train.AdamOptimizer(decaylearning_rate).minimize(loss, global_step=global_step)
        '''
        Train
        '''
        training_epochs = 30000
        sess = tf.InteractiveSession()
        # Initialize variables
        sess.run(tf.global_variables_initializer())
        for epoch in range(training_epochs):
            # Each run of `train` increments global_step by 1
            rate, _, lo = sess.run([decaylearning_rate, train, loss], feed_dict={input_x: train_x, input_y: np.reshape(train_y, [-1, 1]), keep_prob: 0.5})
            if epoch % 1000 == 0:
                print('Epoch {0} learning_rate {1} loss {2} '.format(epoch, rate, lo))
        '''
        Visualize
        '''
        nb_of_xs = 200
        xs1 = np.linspace(-1, 8, num=nb_of_xs)
        xs2 = np.linspace(-1, 8, num=nb_of_xs)
        # Build the grid
        xx, yy = np.meshgrid(xs1, xs2)
        # Initialize and fill the classification plane
        classfication_plane = np.zeros([nb_of_xs, nb_of_xs])
        for i in range(nb_of_xs):
            for j in range(nb_of_xs):
                # Predicted label for each grid point; keep_prob = 1 at inference time
                classfication_plane[i, j] = sess.run(y_pred, feed_dict={input_x: [[xx[i, j], yy[i, j]]], keep_prob: 1.0})
        # Color map for display
        cmap = ListedColormap([
            colorConverter.to_rgba('r', alpha=0.30),
            colorConverter.to_rgba('b', alpha=0.30),
        ])
        # Show the decision regions
        plt.contourf(xx, yy, classfication_plane, cmap=cmap)
        plt.show()
        '''
        Test: compare the test loss with the training loss.
        '''
        test_x, test_y = generate(12, mean, cov, [[3.0, 0.0], [3.0, 3.0], [0.0, 3.0]], num_classes=4)
        # Fold the four classes into two
        test_y = test_y % 2
        xr = []
        xb = []
        for (l, k) in zip(test_y[:], test_x[:]):
            if l == 0.0:
                xr.append([k[0], k[1]])
            else:
                xb.append([k[0], k[1]])
        xr = np.array(xr)
        xb = np.array(xb)
        plt.figure()
        plt.scatter(xr[:, 0], xr[:, 1], c='r', marker='+')
        plt.scatter(xb[:, 0], xb[:, 1], c='b', marker='o')
        lo = sess.run(loss, feed_dict={input_x: test_x, input_y: np.reshape(test_y, [-1, 1]), keep_prob: 1.0})
        print('Test data loss {0}'.format(lo))
        nb_of_xs = 200
        xs1 = np.linspace(-1, 8, num=nb_of_xs)
        xs2 = np.linspace(-1, 8, num=nb_of_xs)
        # Build the grid
        xx, yy = np.meshgrid(xs1, xs2)
        # Initialize and fill the classification plane
        classfication_plane = np.zeros([nb_of_xs, nb_of_xs])
        for i in range(nb_of_xs):
            for j in range(nb_of_xs):
                # Predicted label for each grid point; keep_prob = 1 at inference time
                classfication_plane[i, j] = sess.run(y_pred, feed_dict={input_x: [[xx[i, j], yy[i, j]]], keep_prob: 1.0})
        # Color map for display
        cmap = ListedColormap([
            colorConverter.to_rgba('r', alpha=0.30),
            colorConverter.to_rgba('b', alpha=0.30),
        ])
        # Show the decision regions
        plt.contourf(xx, yy, classfication_plane, cmap=cmap)
        plt.show()
    if __name__ == '__main__':
        # example_overfit()
        # example_l2_norm()
        # example_add_trainset()
        # example_dropout()
        example_dropout_learningrate_decay()


Copyright notice: this is an original article by zyly, released under the CC 4.0 BY-SA license. Please include a link to the original article and this notice when reposting.
Original article: https://www.cnblogs.com/zyly/p/8952384.html