How to Build a CNN with TensorFlow, and the Key Points
A convolutional neural network (CNN) is a class of feedforward neural network that performs convolution operations and has a deep structure; it is one of the representative algorithms of deep learning.
Its overall structure consists of an input layer, hidden layers, and an output layer.
In TensorBoard, the overall structure looks like this:
(figure: TensorBoard graph of the full network)
For a CNN, the input and output layers are no different from those of an ordinary neural network.
Its hidden layers, however, fall into three kinds: convolutional layers (which extract features from the input), pooling layers (which perform feature selection and information filtering), and fully connected layers (equivalent to the hidden layers of a traditional feedforward network).
A convolution passes the input image through a set of convolution filters, each of which activates certain features in the image.
Suppose we have a 5*5 black-and-white image:
(figure: 5*5 input image)
and convolve it with the following kernel:
(figure: convolution kernel)
The result of the convolution is:
(figure: convolved feature map)
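Since the figures are not reproduced here, the following minimal sketch recreates the idea with hypothetical values (the 5*5 image and 3*3 kernel below are illustrative assumptions, not the figures from the article), using the TensorFlow 1.x API used throughout this post:

import numpy as np
import tensorflow as tf

# Hypothetical 5*5 single-channel image, shaped [batch, height, width, channels].
image = np.array([[1, 1, 1, 0, 0],
                  [0, 1, 1, 1, 0],
                  [0, 0, 1, 1, 1],
                  [0, 0, 1, 1, 0],
                  [0, 1, 1, 0, 0]], dtype=np.float32).reshape([1, 5, 5, 1])

# Hypothetical 3*3 kernel, shaped [filter_height, filter_width, in_channels, out_channels].
kernel = np.array([[1, 0, 1],
                   [0, 1, 0],
                   [1, 0, 1]], dtype=np.float32).reshape([3, 3, 1, 1])

conv = tf.nn.conv2d(image, kernel, strides=[1, 1, 1, 1], padding="VALID")

with tf.Session() as sess:
    # Each output value is the dot product of the kernel with one 3*3 patch.
    print(sess.run(conv).reshape([3, 3]))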
The convolution process extracts features, and the CNN then performs its classification based on those features.
In TensorFlow, the key function for the convolutional layer is:
tf.nn.conv2d(input, filter, strides, padding, use_cudnn_on_gpu=None, name=None)
where:
1. input is the input tensor, with shape [batch, height, width, channels];
2. filter is the convolution kernel to use;
3. strides is the stride, in the format [1, step, step, 1], where step is the stride along each spatial dimension of the image;
4. padding is a string that must be either "SAME" or "VALID"; with a stride of 1, SAME pads so that the output has the same spatial size as the input, while VALID applies no padding and shrinks the output, as the sketch below shows.
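A quick check of the two padding modes (a minimal sketch; the 28*28 input and 5*5 kernel sizes are assumptions chosen to match the MNIST example later in this post):

import tensorflow as tf

x = tf.ones([1, 28, 28, 1])   # one 28*28 grayscale image
w = tf.ones([5, 5, 1, 32])    # 5*5 kernel, 1 input channel, 32 output channels

same = tf.nn.conv2d(x, w, strides=[1, 1, 1, 1], padding="SAME")
valid = tf.nn.conv2d(x, w, strides=[1, 1, 1, 1], padding="VALID")

print(same.shape)   # (1, 28, 28, 32): SAME pads so the spatial size is kept
print(valid.shape)  # (1, 24, 24, 32): VALID shrinks each side by kernel_size - 1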
After a convolutional layer has extracted features, the output feature map is passed to a pooling layer for feature selection and information filtering.
The most common kind is max pooling, which keeps the maximum value within each window of the convolved data, i.e. its strongest feature.
Suppose the pooling window is 2X2 with a stride of 2.
The original feature map is:
(figure: feature map before pooling)
After pooling it becomes:
(figure: pooled feature map)
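Again the missing figures are easy to recreate; here is a minimal sketch with a hypothetical 4*4 input (the values are illustrative assumptions):

import numpy as np
import tensorflow as tf

# Hypothetical 4*4 feature map, shaped [batch, height, width, channels].
feature_map = np.array([[1, 3, 2, 4],
                        [5, 6, 1, 2],
                        [7, 2, 9, 0],
                        [1, 4, 3, 8]], dtype=np.float32).reshape([1, 4, 4, 1])

pooled = tf.nn.max_pool(feature_map, ksize=[1, 2, 2, 1],
                        strides=[1, 2, 2, 1], padding="VALID")

with tf.Session() as sess:
    print(sess.run(pooled).reshape([2, 2]))  # [[6. 4.]
                                             #  [7. 9.]]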
In TensorFlow, the key function for the pooling layer is:
tf.nn.max_pool(value, ksize, strides, padding, data_format, name)
1. value: the input to the pooling layer; since a pooling layer usually follows a convolutional layer, its shape is [batch, height, width, channels].
2. ksize: the size of the pooling window, a four-element vector, typically [1, in_height, in_width, 1].
3. strides: as with convolution, the stride of the window along each dimension, also [1, stride, stride, 1].
4. padding: as with convolution, either "VALID" or "SAME". The sketch below shows how a 2X2 window with stride 2 halves the spatial size.
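For example (a minimal sketch; the 28*28*32 input shape is an assumption matching a typical first conv layer on MNIST):

import tensorflow as tf

x = tf.ones([1, 28, 28, 32])  # e.g. the output of a first convolutional layer

pool = tf.nn.max_pool(x, ksize=[1, 2, 2, 1],
                      strides=[1, 2, 2, 1], padding="SAME")

print(pool.shape)  # (1, 14, 14, 32): each 2X2 window is reduced to its maximum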
This is how the convolutional and pooling layers are connected in TensorBoard:
(figure: TensorBoard graph of a conv layer followed by a pooling layer)
The fully connected layers have the same structure as an ordinary neural network, as shown here:
(figure: structure of the fully connected layers)
The helper functions used to build the network are defined below.
def conv2d(x, W, step, pad):
    # 2-D convolution: x is the input, W the kernel, step the stride.
    return tf.nn.conv2d(x, W, strides=[1, step, step, 1], padding=pad)

def max_pool_2X2(x, step, pad):
    # 2X2 max pooling: x is the input, step the stride.
    return tf.nn.max_pool(x, ksize=[1, 2, 2, 1], strides=[1, step, step, 1], padding=pad)

def weight_variable(shape):
    # Builds a weight variable sampled from a truncated normal distribution.
    initial = tf.truncated_normal(shape, stddev=0.1)
    return tf.Variable(initial)

def bias_variable(shape):
    # Builds a bias variable initialized to a small constant.
    initial = tf.constant(0.1, shape=shape)
    return tf.Variable(initial)

def add_layer(inputs, in_size, out_size, n_layer, activation_function=None, keep_prob=1):
    # Adds a fully connected layer with optional activation and dropout.
    layer_name = 'layer_%s' % n_layer
    with tf.name_scope(layer_name):
        with tf.name_scope("Weights"):
            Weights = tf.Variable(tf.truncated_normal([in_size, out_size], stddev=0.1), name="Weights")
            tf.summary.histogram(layer_name + "/weights", Weights)
        with tf.name_scope("biases"):
            biases = tf.Variable(tf.zeros([1, out_size]) + 0.1, name="biases")
            tf.summary.histogram(layer_name + "/biases", biases)
        with tf.name_scope("Wx_plus_b"):
            Wx_plus_b = tf.matmul(inputs, Weights) + biases
            tf.summary.histogram(layer_name + "/Wx_plus_b", Wx_plus_b)
        if activation_function is None:
            outputs = Wx_plus_b
        else:
            outputs = activation_function(Wx_plus_b)
        outputs = tf.nn.dropout(outputs, keep_prob)
        tf.summary.histogram(layer_name + "/outputs", outputs)
        return outputs

def add_cnn_layer(inputs, in_z_dim, out_z_dim, n_layer, conv_step=1, pool_step=2, padding="SAME"):
    # Adds a convolutional layer followed by a pooling layer.
    layer_name = 'layer_%s' % n_layer
    with tf.name_scope(layer_name):
        with tf.name_scope("Weights"):
            W_conv = weight_variable([5, 5, in_z_dim, out_z_dim])
        with tf.name_scope("biases"):
            b_conv = bias_variable([out_z_dim])
        with tf.name_scope("conv"):
            # Convolutional layer with ReLU activation.
            h_conv = tf.nn.relu(conv2d(inputs, W_conv, conv_step, padding) + b_conv)
        with tf.name_scope("pooling"):
            # Pooling layer.
            h_pool = max_pool_2X2(h_conv, pool_step, padding)
        return h_pool
The complete program:

import tensorflow as tf
from tensorflow.examples.tutorials.mnist import input_data

mnist = input_data.read_data_sets("MNIST_data", one_hot=True)

def conv2d(x, W, step, pad):
    return tf.nn.conv2d(x, W, strides=[1, step, step, 1], padding=pad)

def max_pool_2X2(x, step, pad):
    return tf.nn.max_pool(x, ksize=[1, 2, 2, 1], strides=[1, step, step, 1], padding=pad)

def weight_variable(shape):
    initial = tf.truncated_normal(shape, stddev=0.1)  # truncated normal initializer
    return tf.Variable(initial)

def bias_variable(shape):
    initial = tf.constant(0.1, shape=shape)  # small constant initializer
    return tf.Variable(initial)

def add_layer(inputs, in_size, out_size, n_layer, activation_function=None, keep_prob=1):
    layer_name = 'layer_%s' % n_layer
    with tf.name_scope(layer_name):
        with tf.name_scope("Weights"):
            Weights = tf.Variable(tf.truncated_normal([in_size, out_size], stddev=0.1), name="Weights")
            tf.summary.histogram(layer_name + "/weights", Weights)
        with tf.name_scope("biases"):
            biases = tf.Variable(tf.zeros([1, out_size]) + 0.1, name="biases")
            tf.summary.histogram(layer_name + "/biases", biases)
        with tf.name_scope("Wx_plus_b"):
            Wx_plus_b = tf.matmul(inputs, Weights) + biases
            tf.summary.histogram(layer_name + "/Wx_plus_b", Wx_plus_b)
        if activation_function is None:
            outputs = Wx_plus_b
        else:
            outputs = activation_function(Wx_plus_b)
        outputs = tf.nn.dropout(outputs, keep_prob)
        tf.summary.histogram(layer_name + "/outputs", outputs)
        return outputs

def add_cnn_layer(inputs, in_z_dim, out_z_dim, n_layer, conv_step=1, pool_step=2, padding="SAME"):
    layer_name = 'layer_%s' % n_layer
    with tf.name_scope(layer_name):
        with tf.name_scope("Weights"):
            W_conv = weight_variable([5, 5, in_z_dim, out_z_dim])
        with tf.name_scope("biases"):
            b_conv = bias_variable([out_z_dim])
        with tf.name_scope("conv"):
            h_conv = tf.nn.relu(conv2d(inputs, W_conv, conv_step, padding) + b_conv)
        with tf.name_scope("pooling"):
            h_pool = max_pool_2X2(h_conv, pool_step, padding)
        return h_pool

def compute_accuracy(x_data, y_data):
    global prediction
    y_pre = sess.run(prediction, feed_dict={xs: x_data, keep_prob: 1})
    correct_prediction = tf.equal(tf.argmax(y_data, 1), tf.argmax(y_pre, 1))
    accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
    # accuracy depends only on y_data and y_pre, so no feed_dict is needed here.
    result = sess.run(accuracy)
    return result

keep_prob = tf.placeholder(tf.float32)
xs = tf.placeholder(tf.float32, [None, 784])
ys = tf.placeholder(tf.float32, [None, 10])
x_image = tf.reshape(xs, [-1, 28, 28, 1])

h_pool1 = add_cnn_layer(x_image, in_z_dim=1, out_z_dim=32, n_layer="cnn1")
h_pool2 = add_cnn_layer(h_pool1, in_z_dim=32, out_z_dim=64, n_layer="cnn2")

h_pool2_flat = tf.reshape(h_pool2, [-1, 7 * 7 * 64])
h_fc1_drop = add_layer(h_pool2_flat, 7 * 7 * 64, 1024, "layer1",
                       activation_function=tf.nn.relu, keep_prob=keep_prob)

# The last layer outputs raw logits: softmax_cross_entropy_with_logits applies
# softmax internally, so the layer itself must not apply it a second time.
logits = add_layer(h_fc1_drop, 1024, 10, "layer2", activation_function=None, keep_prob=1)
prediction = tf.nn.softmax(logits)

with tf.name_scope("loss"):
    loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(labels=ys, logits=logits),
                          name='loss')
    tf.summary.scalar("loss", loss)

train = tf.train.AdamOptimizer(1e-4).minimize(loss)

init = tf.global_variables_initializer()
merged = tf.summary.merge_all()

with tf.Session() as sess:
    sess.run(init)
    writer = tf.summary.FileWriter("logs/", sess.graph)
    for i in range(5000):
        batch_xs, batch_ys = mnist.train.next_batch(100)
        sess.run(train, feed_dict={xs: batch_xs, ys: batch_ys, keep_prob: 0.5})
        if i % 100 == 0:
            print(compute_accuracy(mnist.test.images, mnist.test.labels))