Let us modify the model from MLP to a Convolutional Neural Network (CNN) for our earlier digit identification problem.

The CNN can be represented as below:

- Conv2D layer with 32 filters, kernel size (3, 3), and 'relu' activation.
- Conv2D layer with 64 filters, kernel size (3, 3), and 'relu' activation.
- MaxPooling layer with pool size (2, 2).
- Dropout layer with rate 0.25.
- Flatten layer to flatten all its input into a single dimension.
- Dense layer with 128 neurons and 'relu' activation.
- Dropout layer with rate 0.5.
- Final Dense output layer with 10 neurons and 'softmax' activation.
- categorical_crossentropy as the loss function.
- Adadelta() as the optimizer.
- accuracy as the metric.
- 128 as the batch size.
- 12 as the number of epochs.

Import the necessary modules.
import keras
from keras.datasets import mnist
from keras.models import Sequential
from keras.layers import Dense, Dropout, Flatten
from keras.layers import Conv2D, MaxPooling2D
from keras import backend as K
import numpy as np
Import the MNIST dataset.
(x_train, y_train), (x_test, y_test) = mnist.load_data()
Reshape the dataset so that it can be fed into our model.
img_rows, img_cols = 28, 28
if K.image_data_format() == 'channels_first':
    x_train = x_train.reshape(x_train.shape[0], 1, img_rows, img_cols)
    x_test = x_test.reshape(x_test.shape[0], 1, img_rows, img_cols)
    input_shape = (1, img_rows, img_cols)
else:
    x_train = x_train.reshape(x_train.shape[0], img_rows, img_cols, 1)
    x_test = x_test.reshape(x_test.shape[0], img_rows, img_cols, 1)
    input_shape = (img_rows, img_cols, 1)
x_train = x_train.astype('float32')
x_test = x_test.astype('float32')
x_train /= 255
x_test /= 255
y_train = keras.utils.to_categorical(y_train, 10)
y_test = keras.utils.to_categorical(y_test, 10)
Apart from the shape of the input data and the image format configuration, the data processing is similar to the MLP model.
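The preprocessing above can be sketched with NumPy alone on synthetic stand-in data (hypothetical random images, not real MNIST), assuming the channels_last format: add a trailing channel axis, scale pixels to [0, 1], and one-hot encode the labels (what keras.utils.to_categorical does).

```python
import numpy as np

# Synthetic stand-in for MNIST: 4 grayscale 28x28 images with labels 0-3.
x = np.random.randint(0, 256, size=(4, 28, 28)).astype('float32')
y = np.array([0, 1, 2, 3])

# Add the trailing channel axis expected by channels_last backends.
x = x.reshape(x.shape[0], 28, 28, 1)

# Scale pixel values from [0, 255] to [0, 1].
x /= 255

# One-hot encode the labels, mirroring keras.utils.to_categorical.
num_classes = 10
y_onehot = np.eye(num_classes)[y]

print(x.shape)         # (4, 28, 28, 1)
print(y_onehot.shape)  # (4, 10)
```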
Create the actual model.
model = Sequential()
model.add(Conv2D(32, kernel_size = (3, 3),
activation = 'relu', input_shape = input_shape))
model.add(Conv2D(64, (3, 3), activation = 'relu'))
model.add(MaxPooling2D(pool_size = (2, 2)))
model.add(Dropout(0.25))
model.add(Flatten())
model.add(Dense(128, activation = 'relu'))
model.add(Dropout(0.5))
model.add(Dense(10, activation = 'softmax'))
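To see why the Flatten layer produces 9216 values, we can trace the feature-map shapes through the stack above with plain Python, assuming 'valid' padding and stride 1 (the Keras defaults for Conv2D):

```python
def conv_out(size, kernel):
    # Output width/height of a 'valid' convolution with stride 1.
    return size - kernel + 1

side = 28                  # MNIST images are 28x28
side = conv_out(side, 3)   # after Conv2D(32): 26
side = conv_out(side, 3)   # after Conv2D(64): 24
side = side // 2           # after MaxPooling2D((2, 2)): 12

flat = side * side * 64    # Flatten: 12 * 12 * 64 = 9216 values
print(flat)

# Parameter counts: filters * (kernel_h * kernel_w * in_channels + 1)
conv1 = 32 * (3 * 3 * 1 + 1)    # first Conv2D
conv2 = 64 * (3 * 3 * 32 + 1)   # second Conv2D
dense1 = flat * 128 + 128       # Dense(128)
dense2 = 128 * 10 + 10          # Dense(10)
print(conv1 + conv2 + dense1 + dense2)  # total trainable parameters
```

The Dropout layers add no parameters; they only zero out a fraction of activations during training.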
Compile the model using the selected loss function, optimizer, and metrics.
model.compile(loss = keras.losses.categorical_crossentropy,
optimizer = keras.optimizers.Adadelta(), metrics = ['accuracy'])
Train the model using the fit() method.
model.fit(
x_train, y_train,
batch_size = 128,
epochs = 12,
verbose = 1,
validation_data = (x_test, y_test)
)
Executing the application will output the below information:
Train on 60000 samples, validate on 10000 samples
Epoch 1/12
60000/60000 [==============================] - 84s 1ms/step - loss: 0.2687 - acc: 0.9173 - val_loss: 0.0549 - val_acc: 0.9827
Epoch 2/12
60000/60000 [==============================] - 86s 1ms/step - loss: 0.0899 - acc: 0.9737 - val_loss: 0.0452 - val_acc: 0.9845
Epoch 3/12
60000/60000 [==============================] - 83s 1ms/step - loss: 0.0666 - acc: 0.9804 - val_loss: 0.0362 - val_acc: 0.9879
Epoch 4/12
60000/60000 [==============================] - 81s 1ms/step - loss: 0.0564 - acc: 0.9830 - val_loss: 0.0336 - val_acc: 0.9890
Epoch 5/12
60000/60000 [==============================] - 86s 1ms/step - loss: 0.0472 - acc: 0.9861 - val_loss: 0.0312 - val_acc: 0.9901
Epoch 6/12
60000/60000 [==============================] - 83s 1ms/step - loss: 0.0414 - acc: 0.9877 - val_loss: 0.0306 - val_acc: 0.9902
Epoch 7/12
60000/60000 [==============================] - 89s 1ms/step - loss: 0.0375 - acc: 0.9883 - val_loss: 0.0281 - val_acc: 0.9906
Epoch 8/12
60000/60000 [==============================] - 91s 2ms/step - loss: 0.0339 - acc: 0.9893 - val_loss: 0.0280 - val_acc: 0.9912
Epoch 9/12
60000/60000 [==============================] - 89s 1ms/step - loss: 0.0325 - acc: 0.9901 - val_loss: 0.0260 - val_acc: 0.9909
Epoch 10/12
60000/60000 [==============================] - 89s 1ms/step - loss: 0.0284 - acc: 0.9910 - val_loss: 0.0250 - val_acc: 0.9919
Epoch 11/12
60000/60000 [==============================] - 86s 1ms/step - loss: 0.0287 - acc: 0.9907 - val_loss: 0.0264 - val_acc: 0.9916
Epoch 12/12
60000/60000 [==============================] - 86s 1ms/step - loss: 0.0265 - acc: 0.9920 - val_loss: 0.0249 - val_acc: 0.9922
Let us evaluate the model using the test data.
score = model.evaluate(x_test, y_test, verbose = 0)
print('Test loss:', score[0])
print('Test accuracy:', score[1])
Executing the above code will output the below information:
Test loss: 0.024936060590433316
Test accuracy: 0.9922
The test accuracy is 99.22%. We have created a good model to identify handwritten digits.
Finally, predict the digits from the images as below:
pred = model.predict(x_test)
pred = np.argmax(pred, axis = 1)[:5]
label = np.argmax(y_test,axis = 1)[:5]
print(pred)
print(label)
The output of the above application is as follows:
[7 2 1 0 4]
[7 2 1 0 4]
The output of both arrays is identical, which indicates that our model correctly predicts the first five images.
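The argmax step used above can be illustrated on hypothetical softmax outputs (synthetic probabilities, not real model predictions): each row sums to 1, and np.argmax along axis 1 picks the most probable digit per sample.

```python
import numpy as np

# Hypothetical softmax outputs for 3 samples over 10 classes.
probs = np.full((3, 10), 0.05)
probs[0, 7] = 0.55   # sample 0 -> digit 7
probs[1, 2] = 0.55   # sample 1 -> digit 2
probs[2, 1] = 0.55   # sample 2 -> digit 1

# Each row sums to 1; argmax converts probabilities to class labels.
pred = np.argmax(probs, axis=1)
print(pred)  # [7 2 1]
```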