MNIST is a simple computer-vision dataset consisting of handwritten digit images, and it is often the first dataset that newcomers to machine learning encounter. However, some readers have reported that Keras raises an error when loading the MNIST dataset. This article walks through a workaround:
1. Locate the mnist.py file in your local Keras installation, for example:
F:\python_enter_anaconda510\Lib\site-packages\tensorflow\python\keras\datasets
2. Download the mnist.npz file to your local machine from:
https://s3.amazonaws.com/img-datasets/mnist.npz
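If you prefer to fetch the file from a script instead of a browser, a minimal sketch using only the standard library is shown below; the destination filename is just an example, move the file to wherever you want to keep it.
import urllib.request
# Fetch mnist.npz into the current directory (URL from step 2 above)
urllib.request.urlretrieve('https://s3.amazonaws.com/img-datasets/mnist.npz', 'mnist.npz')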
3. Replace the contents of mnist.py with the following and save:
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
from ..utils.data_utils import get_file
import numpy as np
def load_data(path='mnist.npz'):
    """Loads the MNIST dataset.
    # Arguments
        path: path where to cache the dataset locally
            (relative to ~/.keras/datasets).
    # Returns
        Tuple of Numpy arrays: `(x_train, y_train), (x_test, y_test)`.
    """
    path = 'E:/Data/Mnist/mnist.npz'  # set this to where you placed mnist.npz; note the forward slashes
    f = np.load(path)
    x_train, y_train = f['x_train'], f['y_train']
    x_test, y_test = f['x_test'], f['y_test']
    f.close()
    return (x_train, y_train), (x_test, y_test)
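After saving the modified file, a quick sanity check (a minimal sketch; adjust the import to tensorflow.keras if that is the package you patched) is:
from keras.datasets import mnist

(x_train, y_train), (x_test, y_test) = mnist.load_data()
print(x_train.shape, y_train.shape)   # expected: (60000, 28, 28) (60000,)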
Supplement: the Keras MNIST handwritten digit recognition dataset
Downloading the MNIST data
1. Import the required modules
import keras
import numpy as np
from keras.utils import np_utils
import os
from keras.datasets import mnist
2. Download the MNIST data for the first time
(x_train_image, y_train_label), (x_test_image, y_test_label) = mnist.load_data()
The first time mnist.load_data() is called, it checks whether the MNIST dataset file already exists in the user's home directory; if not, it downloads the file automatically (which is why the first run is slow).
3. Check the downloaded MNIST data file
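By default Keras caches the downloaded file under ~/.keras/datasets. A minimal check that the file is there:
import os

cache_path = os.path.expanduser('~/.keras/datasets/mnist.npz')   # Keras's default cache location
print(cache_path, os.path.exists(cache_path))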
4. Inspect the MNIST data
print('train data =', len(x_train_image))
print('test data =', len(x_test_image))
Looking at the training data
1. The training set consists of images and labels: each image is a 28 x 28 single-channel (grayscale) digit image, and the label is the decimal digit that the image represents (see the quick check below).
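A quick check of this structure, assuming the variables loaded above:
print(x_train_image[0].shape)   # (28, 28): a single grayscale image
print(y_train_label[0])         # 5: the digit the first image represents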
2. Display a digit image
import matplotlib.pyplot as plt

def plot_image(image):
    fig = plt.gcf()
    fig.set_size_inches(2, 2)           # set the figure size
    plt.imshow(image, cmap='binary')    # cmap='binary' displays the image in grayscale
    plt.show()
3. Look at the first item in the training data
plot_image(x_train_image[0])
Look at the corresponding label (ground truth)
print(y_train_label[0])
Output: 5
Viewing multiple training images and labels
Above we displayed a single image; the function below displays several handwritten digits at once, so we can inspect the data more conveniently.
def plot_images_labels_prediction(images, labels,
                                  prediction, idx, num=10):
    fig = plt.gcf()
    fig.set_size_inches(12, 14)         # set the figure size
    if num > 25: num = 25
    for i in range(0, num):
        ax = plt.subplot(5, 5, 1 + i)   # a 5 x 5 grid of subplots; the third argument selects the subplot
        ax.imshow(images[idx], cmap='binary')
        title = "label=" + str(labels[idx])
        if len(prediction) > 0:         # if predictions were passed in
            title += ",predict=" + str(prediction[idx])
        ax.set_title(title, fontsize=10)
        ax.set_xticks([])
        ax.set_yticks([])
        idx += 1
    plt.show()
plot_images_labels_prediction(x_train_image, y_train_label, [], 0, 10)
Look at the first ten handwritten digits in the test set
plot_images_labels_prediction(x_test_image, y_test_label, [], 0, 10)
Data preprocessing for the multilayer perceptron model
Preprocessing the features (the pixel values of the digit images) takes two steps:
(1) reshape each 28 x 28 digit image into a one-dimensional vector of length 784 and convert it to float;
(2) normalize the image pixel values.
1. Check the shape of the images
print("x_train_image : " ,len(x_train_image) , x_train_image.shape )
print("y_train_label : ", len(y_train_label) , y_train_label.shape)
#output :
x_train_image : 60000 (60000, 28, 28)
y_train_label : 60000 (60000,)
2. Convert the images with reshape
# reshape the images into 784-dimensional vectors
x_Train = x_train_image.reshape(60000,784).astype('float32')
x_Test = x_test_image.reshape(10000,784).astype('float32')
print('x_Train : ' ,x_Train.shape)
print('x_Test' ,x_Test.shape)
3. Normalization
Normalizing the image values improves the accuracy of the model we train later, because the raw values range from 0 to 255, each representing the grayscale intensity of one pixel.
# normalization
x_Test_normalize = x_Test/255
x_Train_normalize = x_Train/255
4. Look at a normalized training image
print(x_Train_normalize[0])  # the first training digit after normalization
[0.         0.         0.         0.         0.         0.
 ...
 0.         0.         0.         0.         0.         0.
 0.
 0.21568628 0.6745098  0.8862745  0.99215686 0.99215686 0.99215686
 0.99215686 0.95686275 0.52156866 0.04313726 0.         0.
 0.         0.         0.         0.         0.         0.
 0.         0.         0.         0.         0.         0.
 0.         0.         0.         0.         0.53333336 0.99215686
 0.99215686 0.99215686 0.83137256 0.5294118  0.5176471  0.0627451
 0.         0.         0.         0.        ]
Label data preprocessing
The label field is originally a digit from 0 to 9; it must be converted with one-hot encoding into a vector of ten 0/1 values. For example, 7 becomes 0000000100 after one-hot encoding, which maps directly onto the 10 neurons of the output layer (a small check is shown below).
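A small check of the example above, using the np_utils module imported earlier:
print(np_utils.to_categorical([7], num_classes=10))
# [[0. 0. 0. 0. 0. 0. 0. 1. 0. 0.]]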
# one-hot encode both the training and test labels
y_TrainOneHot = np_utils.to_categorical(y_train_label)
y_TestOneHot = np_utils.to_categorical(y_test_label)
print(y_TrainOneHot[:5])  # look at the first 5 labels
[[0. 0. 0. 0. 0. 1. 0. 0. 0. 0.]    # 5
 [1. 0. 0. 0. 0. 0. 0. 0. 0. 0.]    # 0
 [0. 0. 0. 0. 1. 0. 0. 0. 0. 0.]    # 4
 [0. 1. 0. 0. 0. 0. 0. 0. 0. 0.]    # 1
 [0. 0. 0. 0. 0. 0. 0. 0. 0. 1.]]   # 9
Recognizing MNIST handwritten digits with a Keras multilayer perceptron
1. We will build the multilayer perceptron model described below.
2. After building the model, we must train it before we can use it to predict (recognize) these handwritten digits.
The data preprocessing is already done: the input digit images have been normalized and the labels one-hot encoded.
Next we build the model.
The multilayer perceptron will have 784 neurons in the input layer, 256 neurons in the hidden layer, and 10 neurons in the output layer.
1. Import the relevant modules
from keras.models import Sequential
from keras.layers import Dense
2. Create a Sequential model
# create the Sequential model
model = Sequential()
3. Add the input layer and the hidden layer
Use the model.add() method to add a Dense (fully connected) layer.
model.add(Dense(units=256,
                input_dim=784,
                kernel_initializer='normal',
                activation='relu'))
Parameter | Description |
units=256 | sets the number of neurons in the hidden layer to 256 |
input_dim=784 | sets the number of neurons in the input layer to 784 |
kernel_initializer='normal' | initialize the weights and biases with normally distributed random numbers |
activation='relu' | use relu as the activation function |
4. Add the output layer
model.add(Dense(units=10,
                kernel_initializer='normal',
                activation='softmax'))
Parameter | Description |
units=10 | sets the number of neurons in the output layer to 10 |
kernel_initializer='normal' | same as above |
activation='softmax' | use softmax as the activation function |
5. View the model summary
print(model.summary())
The Param count for each layer is (number of neurons in the previous layer) x (number of neurons in this layer) + (number of neurons in this layer).
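For example, a quick arithmetic check for the 784-256-10 model above:
hidden_params = 784 * 256 + 256    # 200960
output_params = 256 * 10 + 10      # 2570
print(hidden_params, output_params, hidden_params + output_params)   # 200960 2570 203530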
Training
1. Define how the model is trained
model.compile(loss='categorical_crossentropy' ,optimizer='adam',metrics=['accuracy'])
loss: the loss function; here we use categorical cross-entropy.
optimizer: the optimizer, which helps training converge faster.
metrics: evaluate the model with accuracy.
2. Start training
train_history = model.fit(x=x_Train_normalize, y=y_TrainOneHot, validation_split=0.2,
                          epochs=10, batch_size=200, verbose=2)
model.fit() trains the model, and the training history is stored in the train_history variable.
(1) Training data:
x = x_Train_normalize
y = y_TrainOneHot
(2) Split between training and validation data:
validation_split=0.2 gives a training : validation ratio of 8 : 2.
(3) Training epochs and batch size:
epochs=10, batch_size=200
(4) Display of the training progress:
verbose=2
3. Define show_train_history to plot the training history
def show_train_history(train_history, train, validation):
    plt.plot(train_history.history[train])
    plt.plot(train_history.history[validation])
    plt.title("Train_history")
    plt.ylabel(train)
    plt.xlabel('Epoch')
    plt.legend(['train', 'validation'], loc='upper left')
    plt.show()
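Typical usage, as in the full script later in this article (the 'acc'/'val_acc' key names assume an older Keras; newer versions record 'accuracy'/'val_accuracy' instead):
show_train_history(train_history, 'acc', 'val_acc')     # accuracy curves
show_train_history(train_history, 'loss', 'val_loss')   # loss curves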
Evaluating model accuracy on the test data
scores = model.evaluate(x_Test_normalize,y_TestOneHot)
print()
print('accuracy=',scores[1] )
accuracy= 0.9769
Making predictions
With the previous steps we built and trained the model, reaching an acceptable accuracy of about 0.97. Next we use this model for prediction.
1. Run the prediction
prediction = model.predict_classes(x_Test_normalize)  # predict on the normalized test set, consistent with training
print(prediction)
Result: [7 2 1 ... 4 5 6]
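Note that predict_classes() only exists in older Keras versions; if your version no longer has it, an equivalent sketch is:
import numpy as np

prediction = np.argmax(model.predict(x_Test_normalize), axis=1)   # class with the highest softmax probability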
2. Display 10 prediction results
plot_images_labels_prediction(x_test_image,y_test_label,prediction,idx=340)
We can see that the first digit shown, whose label is 5, was predicted as 3.
Displaying the confusion matrix
Above, the digit 5 at index 340 of the test set was wrongly predicted as 3. If we want to know in more detail which digits our model predicts most accurately and which digits are easily confused, we can use a confusion matrix, also called an error matrix.
1. Build the confusion matrix with pandas.
showMetrix = pd.crosstab(y_test_label, prediction, rownames=['label'], colnames=['predict'])
print(showMetrix)
predict 0 1 2 3 4 5 6 7 8 9
label
0 971 0 1 1 1 0 2 1 3 0
1 0 1124 4 0 0 1 2 0 4 0
2 5 0 1009 2 1 0 3 4 8 0
3 0 0 5 993 0 1 0 3 4 4
4 1 0 5 1 961 0 3 0 3 8
5 3 0 0 16 1 852 7 2 8 3
6 5 3 3 1 3 3 939 0 1 0
7 0 5 13 7 1 0 0 988 5 9
8 4 0 3 7 1 1 1 2 954 1
9 3 6 0 11 7 2 1 4 4 971
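To see at a glance which off-diagonal cell (i.e. which mis-prediction) is largest, a small helper like the following can be used; this snippet is an addition, not part of the original article:
import numpy as np

conf = showMetrix.values.copy()                  # the counts as a plain numpy array
np.fill_diagonal(conf, 0)                        # ignore the correct predictions on the diagonal
row, col = np.unravel_index(conf.argmax(), conf.shape)
print('most confused pair (true, predicted):', (row, col), 'count:', conf[row, col])
# with the table above this should report (5, 3) with a count of 16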
2. Using a DataFrame
df = pd.DataFrame({'label': y_test_label, 'predict': prediction})
print(df)
label predict
0 7 7
1 2 2
2 1 1
3 0 0
4 4 4
5 1 1
6 4 4
7 9 9
8 5 5
9 9 9
10 0 0
11 6 6
12 9 9
13 0 0
14 1 1
15 5 5
16 9 9
17 7 7
18 3 3
19 4 4
20 9 9
21 6 6
22 6 6
23 5 5
24 4 4
25 0 0
26 7 7
27 4 4
28 0 0
29 1 1
... ... ...
9970 5 5
9971 2 2
9972 4 4
9973 9 9
9974 4 4
9975 3 3
9976 6 6
9977 4 4
9978 1 1
9979 7 7
9980 2 2
9981 6 6
9982 5 6
9983 0 0
9984 1 1
9985 2 2
9986 3 3
9987 4 4
9988 5 5
9989 6 6
9990 7 7
9991 8 8
9992 9 9
9993 0 0
9994 1 1
9995 2 2
9996 3 3
9997 4 4
9998 5 5
9999 6 6
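With this DataFrame it is easy to filter for specific mistakes, for example every item whose true label 5 was predicted as 3 (a small sketch; the column names follow the DataFrame built above):
print(df[(df['label'] == 5) & (df['predict'] == 3)])   # index 340 should appear in this list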
Increasing the hidden layer to 1000 neurons
model.add(Dense(units=1000,
input_dim=784,
kernel_initializer='normal',
activation='relu'))
With more neurons in the hidden layer there are also more parameters, so training the model takes longer.
Adding Dropout to avoid overfitting
from keras.layers import Dropout
# create the Sequential model
model = Sequential()
model.add(Dense(units=1000,
input_dim=784,
kernel_initializer='normal',
activation='relu'))
model.add(Dropout(0.5))  # add Dropout
model.add(Dense(units=10,
kernel_initializer='normal',
activation='softmax'))
The gap between the training accuracy and the validation accuracy becomes smaller; a quick way to check is shown below.
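A quick comparison of the final training and validation accuracy from the history object (key names as above, assuming an older Keras):
print('train acc:', train_history.history['acc'][-1],
      'val acc:', train_history.history['val_acc'][-1])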
Building a multilayer perceptron with two hidden layers
# create the Sequential model
model = Sequential()
# input layer + hidden layer 1
model.add(Dense(units=1000,
input_dim=784,
kernel_initializer='normal',
activation='relu'))
model.add(Dropout(0.5))  # add Dropout
# hidden layer 2
model.add(Dense(units=1000,
kernel_initializer='normal',
activation='relu'))
model.add(Dropout(0.5))  # add Dropout
# output layer
model.add(Dense(units=10,
kernel_initializer='normal',
activation='softmax'))
print(model.summary())
Full code:
import tensorflow as tf
import keras
import matplotlib.pyplot as plt
import numpy as np
from keras.utils import np_utils
from keras.datasets import mnist
from keras.models import Sequential
from keras.layers import Dense
from keras.layers import Dropout
import pandas as pd
import os
np.random.seed(10)
os.environ["CUDA_VISIBLE_DEVICES"] = "2"
(x_train_image ,y_train_label),(x_test_image,y_test_label) = mnist.load_data()
#
# print('train data =', len(x_train_image))
# print('test data =', len(x_test_image))
def plot_image(image):
    fig = plt.gcf()
    fig.set_size_inches(2, 2)           # set the figure size
    plt.imshow(image, cmap='binary')    # cmap='binary' displays the image in grayscale
    plt.show()
def plot_images_labels_prediction(images, labels,
                                  prediction, idx, num=10):
    fig = plt.gcf()
    fig.set_size_inches(12, 14)
    if num > 25: num = 25
    for i in range(0, num):
        ax = plt.subplot(5, 5, 1 + i)   # a 5 x 5 grid of subplots; the third argument selects the subplot
        ax.imshow(images[idx], cmap='binary')
        title = "label=" + str(labels[idx])
        if len(prediction) > 0:
            title += ",predict=" + str(prediction[idx])
        ax.set_title(title, fontsize=10)
        ax.set_xticks([])
        ax.set_yticks([])
        idx += 1
    plt.show()
def show_train_history(train_history, train, validation):
    plt.plot(train_history.history[train])
    plt.plot(train_history.history[validation])
    plt.title("Train_history")
    plt.ylabel(train)
    plt.xlabel('Epoch')
    plt.legend(['train', 'validation'], loc='upper left')
    plt.show()
# plot_images_labels_prediction(x_train_image, y_train_label, [], 0, 10)
#
# plot_images_labels_prediction(x_test_image, y_test_label, [], 0, 10)
print("x_train_image : " ,len(x_train_image) , x_train_image.shape )
print("y_train_label : ", len(y_train_label) , y_train_label.shape)
# reshape the images into 784-dimensional vectors
x_Train = x_train_image.reshape(60000,784).astype('float32')
x_Test = x_test_image.reshape(10000,784).astype('float32')
# print('x_Train : ' ,x_Train.shape)
# print('x_Test' ,x_Test.shape)
# normalization
x_Test_normalize = x_Test/255
x_Train_normalize = x_Train/255
# print(x_Train_normalize[0])  # the first training digit after normalization
# one-hot encode both the training and test labels
y_TrainOneHot = np_utils.to_categorical(y_train_label)
y_TestOneHot = np_utils.to_categorical(y_test_label)
print(y_TrainOneHot[:5])  # look at the first 5 labels
# build the Sequential model
model = Sequential()
model.add(Dense(units=1000,
input_dim=784,
kernel_initializer='normal',
activation='relu'))
model.add(Dropout(0.5))  # add Dropout
# hidden layer 2
model.add(Dense(units=1000,
kernel_initializer='normal',
activation='relu'))
model.add(Dropout(0.5))  # add Dropout
model.add(Dense(units=10,
kernel_initializer='normal',
activation='softmax'))
print(model.summary())
# define how the model is trained
model.compile(loss='categorical_crossentropy' ,optimizer='adam',metrics=['accuracy'])
# start training
train_history =model.fit(x=x_Train_normalize,
y=y_TrainOneHot,validation_split=0.2,
epochs=10, batch_size=200,verbose=2)
show_train_history(train_history,'acc','val_acc')
scores = model.evaluate(x_Test_normalize,y_TestOneHot)
print()
print('accuracy=',scores[1] )
prediction = model.predict_classes(x_Test_normalize)  # predict on the normalized test set
print(prediction)
plot_images_labels_prediction(x_test_image,y_test_label,prediction,idx=340)
showMetrix = pd.crosstab(y_test_label, prediction, rownames=['label'], colnames=['predict'])
print(showMetrix)
df = pd.DataFrame({'label': y_test_label, 'predict': prediction})
print(df)
#
#
# plot_image(x_train_image[0])
#
# print(y_train_label[0])
Code 2:
import numpy as np
from keras.models import Sequential
from keras.layers import Dense, Dropout
from keras.utils import np_utils
from keras.datasets import mnist
from keras.optimizers import SGD
import os
os.environ["CUDA_VISIBLE_DEVICES"] = "2"
def load_data():
    (x_train, y_train), (x_test, y_test) = mnist.load_data()
    number = 10000                       # use only the first 10000 training samples
    x_train = x_train[0:number]
    y_train = y_train[0:number]
    x_train = x_train.reshape(number, 28*28)
    x_test = x_test.reshape(x_test.shape[0], 28*28)
    x_train = x_train.astype('float32')
    x_test = x_test.astype('float32')
    y_train = np_utils.to_categorical(y_train, 10)   # one-hot encode the labels
    y_test = np_utils.to_categorical(y_test, 10)
    x_train = x_train / 255              # normalize the pixel values
    x_test = x_test / 255
    return (x_train, y_train), (x_test, y_test)
(x_train,y_train),(x_test,y_test) = load_data()
model = Sequential()
model.add(Dense(input_dim=28*28,units=689,activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(units=689,activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(units=689,activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(units=10, activation='softmax'))
model.compile(loss='categorical_crossentropy',optimizer='adam',metrics=['accuracy'])
model.fit(x_train,y_train,batch_size=10000,epochs=20)
res1 = model.evaluate(x_train,y_train,batch_size=10000)
print("
Train Acc :",res1[1])
res2 = model.evaluate(x_test,y_test,batch_size=10000)
print("
Test Acc :",res2[1])
That concludes this article on loading the MNIST dataset with Keras. We hope it provides a useful reference, and thank you for supporting W3Cschool.