Module: tf.nn
Defined in: tensorflow/tools/api/generator/api/nn/__init__.py.
Imports for the Python API.
This file is MACHINE GENERATED! Do NOT edit. Generated by the tensorflow/tools/api/generator/create_python_api.py script.
Modules
Activation Functions
- tf.nn.relu(features, name=None) #max(features, 0)
- tf.nn.relu6(features, name=None) #min(max(features, 0), 6)
- tf.nn.softplus(features, name=None) #log(exp(features) + 1)
- tf.nn.dropout(x, keep_prob, noise_shape=None, seed=None, name=None) #computes dropout: keeps each element with probability keep_prob, scaling kept values by 1/keep_prob
- tf.nn.bias_add(value, bias, name=None) #adds a bias vector to the last dimension of value
- tf.sigmoid(x, name=None) #1/(1+exp(-x))
- tf.tanh(x, name=None) #hyperbolic tangent: (exp(x)-exp(-x))/(exp(x)+exp(-x))
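The formulas in the comments above can be sketched directly in NumPy (these are illustrations of the math, not the TensorFlow ops themselves):

```python
import numpy as np

# NumPy sketches of the activation formulas listed above.
def relu(x):
    return np.maximum(x, 0)                 # max(features, 0)

def relu6(x):
    return np.minimum(np.maximum(x, 0), 6)  # min(max(features, 0), 6)

def softplus(x):
    return np.log1p(np.exp(x))              # log(exp(features) + 1)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))         # 1 / (1 + exp(-x))

x = np.array([-3.0, 0.0, 2.0, 8.0])
print(relu(x))   # [0. 0. 2. 8.]
print(relu6(x))  # [0. 0. 2. 6.]
```

Note how relu6 only differs from relu in clipping large activations at 6.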
Convolution
- tf.nn.conv2d(input, filter, strides, padding, use_cudnn_on_gpu=None, name=None) #2-D convolution on 4-D input
- tf.nn.depthwise_conv2d(input, filter, strides, padding, name=None) #depthwise convolution on 4-D input: each input channel is convolved with its own filter
- tf.nn.separable_conv2d(input, depthwise_filter, pointwise_filter, strides, padding, name=None) #performs a depthwise convolution acting on channels separately, followed by a pointwise (1x1) convolution that mixes the channels
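As a minimal sketch of what tf.nn.conv2d computes for one channel with stride 1 and 'VALID' padding (note that TensorFlow's "convolution" is actually cross-correlation: the kernel is not flipped):

```python
import numpy as np

# Naive single-channel, stride-1, 'VALID'-padding convolution sketch.
def conv2d_valid(image, kernel):
    ih, iw = image.shape
    kh, kw = kernel.shape
    # 'VALID' padding: output shrinks by kernel size - 1 in each dimension.
    out = np.zeros((ih - kh + 1, iw - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            # Element-wise product of the window with the kernel, summed.
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

img = np.arange(16, dtype=float).reshape(4, 4)
k = np.ones((2, 2))
print(conv2d_valid(img, k))  # 3x3 output; top-left entry is 0+1+4+5 = 10
```

The real op additionally vectorizes this over the batch, input-channel, and output-channel dimensions of its 4-D tensors.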
Pooling
- tf.nn.avg_pool(value, ksize, strides, padding, name=None) #average pooling
- tf.nn.max_pool(value, ksize, strides, padding, name=None) #max pooling
- tf.nn.max_pool_with_argmax(input, ksize, strides, padding, Targmax=None, name=None) #returns both the max values and their flattened indices
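A sketch of both pooling modes on a single 2-D plane, with a 2x2 window and stride 2 (the real ops take 4-D tensors and ksize/strides per dimension):

```python
import numpy as np

# Illustrative max/average pooling over a 2-D array.
def pool2d(x, ksize=2, stride=2, mode="max"):
    h, w = x.shape
    oh, ow = (h - ksize) // stride + 1, (w - ksize) // stride + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            win = x[i * stride:i * stride + ksize, j * stride:j * stride + ksize]
            out[i, j] = win.max() if mode == "max" else win.mean()
    return out

x = np.arange(16, dtype=float).reshape(4, 4)
print(pool2d(x, mode="max"))  # [[ 5.  7.] [13. 15.]]
print(pool2d(x, mode="avg"))  # [[ 2.5  4.5] [10.5 12.5]]
```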
Normalization
- tf.nn.l2_normalize(x, dim, epsilon=1e-12, name=None) #L2-norm normalization along dimension dim
- tf.nn.local_response_normalization(input, depth_radius=None, bias=None, alpha=None, beta=None, name=None) #local response normalization; each element is normalized independently using a window across the depth dimension
- tf.nn.moments(x, axes, name=None) #returns the mean and variance over the given axes
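Sketches of l2_normalize and moments, following the documented formulas (x / sqrt(max(sum(x^2), epsilon)) for the former; mean and variance for the latter):

```python
import numpy as np

# L2-normalize along a given axis: each slice ends up with unit L2 norm.
def l2_normalize(x, axis, epsilon=1e-12):
    norm = np.sqrt(np.maximum(np.sum(np.square(x), axis=axis, keepdims=True), epsilon))
    return x / norm

# Mean and variance over the given axes (the two "moments").
def moments(x, axes):
    return np.mean(x, axis=tuple(axes)), np.var(x, axis=tuple(axes))

v = np.array([3.0, 4.0])
print(l2_normalize(v, axis=0))  # [0.6 0.8]  (norm is 5)
m, var = moments(np.array([1.0, 2.0, 3.0, 4.0]), axes=[0])
print(m, var)                   # 2.5 1.25
```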
Losses
- tf.nn.l2_loss(t, name=None) #sum(t^2)/2
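The l2_loss formula is easy to get wrong: it is half the sum of squares, with no square root and no averaging. A one-line sketch:

```python
import numpy as np

# tf.nn.l2_loss computes sum(t^2) / 2 -- no sqrt, no 1/n averaging.
def l2_loss(t):
    return np.sum(np.square(t)) / 2.0

print(l2_loss(np.array([1.0, 2.0, 3.0])))  # (1 + 4 + 9) / 2 = 7.0
```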
Classification
- tf.nn.sigmoid_cross_entropy_with_logits(logits, targets, name=None) #sigmoid cross-entropy
- tf.nn.softmax(logits, name=None) #softmax[i, j] = exp(logits[i, j]) / sum_j(exp(logits[i, j]))
- tf.nn.log_softmax(logits, name=None) #logsoftmax[i, j] = logits[i, j] - log(sum_j(exp(logits[i, j])))
- tf.nn.softmax_cross_entropy_with_logits(logits, labels, name=None) #softmax cross-entropy between logits and labels
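The softmax family above can be sketched in NumPy; subtracting the row max first is the standard numerical-stability trick (it leaves the result unchanged but avoids overflow in exp):

```python
import numpy as np

def softmax(logits):
    z = logits - logits.max(axis=-1, keepdims=True)  # shift for stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def log_softmax(logits):
    z = logits - logits.max(axis=-1, keepdims=True)
    return z - np.log(np.exp(z).sum(axis=-1, keepdims=True))

def softmax_cross_entropy(logits, labels):
    # labels are one-hot (or soft) probability distributions per row
    return -np.sum(labels * log_softmax(logits), axis=-1)

logits = np.array([[1.0, 2.0, 3.0]])
labels = np.array([[0.0, 0.0, 1.0]])
print(softmax(logits))                        # rows sum to 1
print(softmax_cross_entropy(logits, labels))  # ~[0.4076]
```

Note that the fused op exists because computing softmax and cross-entropy separately loses precision; the combined log_softmax form is the stable one.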
RNN
- tf.nn.rnn(cell, inputs, initial_state=None, dtype=None, sequence_length=None, scope=None) #builds a recurrent neural network from cell, an instance of RNNCell
- tf.nn.dynamic_rnn(cell, inputs, sequence_length=None, initial_state=None, dtype=None, parallel_iterations=None, swap_memory=False, time_major=False, scope=None) #builds a dynamic RNN from an RNNCell instance; unlike tf.nn.rnn, it unrolls dynamically according to the input and returns (outputs, state)
- tf.nn.state_saving_rnn(cell, inputs, state_saver, state_name, sequence_length=None, scope=None) #an RNN that saves and restores its state via state_saver
- tf.nn.bidirectional_rnn(cell_fw, cell_bw, inputs, initial_state_fw=None, initial_state_bw=None, dtype=None, sequence_length=None, scope=None) #bidirectional RNN; returns a 3-tuple (outputs, output_state_fw, output_state_bw)
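A minimal NumPy sketch of the loop these ops run with a basic (vanilla) RNN cell: h_t = tanh(x_t W_x + h_{t-1} W_h + b), collecting every step's state as the outputs. The weights here are random and purely illustrative, not anything the library provides:

```python
import numpy as np

# Vanilla RNN unrolled by hand; returns (outputs, final_state) like the TF ops.
def rnn(inputs, W_x, W_h, b, h0):
    # inputs: [time, batch, input_dim] (i.e. time-major layout)
    h = h0
    outputs = []
    for x_t in inputs:
        h = np.tanh(x_t @ W_x + h @ W_h + b)  # h_t = tanh(x_t W_x + h_{t-1} W_h + b)
        outputs.append(h)
    return np.stack(outputs), h

T, B, I, H = 3, 2, 4, 5   # time steps, batch, input dim, hidden dim
rng = np.random.default_rng(0)
inputs = rng.normal(size=(T, B, I))
W_x = rng.normal(size=(I, H))
W_h = rng.normal(size=(H, H))
b = np.zeros(H)
outputs, state = rnn(inputs, W_x, W_h, b, np.zeros((B, H)))
print(outputs.shape, state.shape)  # (3, 2, 5) (2, 5)
```

dynamic_rnn performs the same recurrence but builds the loop with a graph-level while-loop instead of unrolling it, and bidirectional_rnn runs two such loops, one over the reversed sequence.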