
[A First Taste of Deep Learning with Keras] Hands-On 4: IMDB Movie-Review Text Classification with an Embedding Layer

Posted: 2019-10-24 04:09:16


This hands-on example is based on the official TensorFlow Keras tutorial.

The code is posted here first; I will come back later and annotate it in more detail.

# TensorFlow and tf.keras
import os
os.environ["KMP_DUPLICATE_LIB_OK"] = "TRUE"

import tensorflow as tf
from tensorflow import keras

# Helper libraries
import numpy as np
import matplotlib.pyplot as plt

print(tf.__version__)

1.12.0

imdb = keras.datasets.imdb

(train_data, train_labels), (test_data, test_labels) = imdb.load_data(num_words=10000)

Downloading data from /tensorflow/tf-keras-datasets/imdb.npz
17465344/17464789 [==============================] - 12s 1us/step

print("Training entries: {}, labels: {}".format(len(train_data), len(train_labels)))

Training entries: 25000, labels: 25000

print(train_data[0])
len(train_data[0]), len(train_data[1])

[1, 14, 22, 16, 43, 530, 973, 1622, 1385, 65, 458, 4468, 66, 3941, 4, 173, 36, 256, 5, 25, 100, 43, 838, 112, 50, 670, 2, 9, 35, 480, 284, 5, 150, 4, 172, 112, 167, 2, 336, 385, 39, 4, 172, 4536, 1111, 17, 546, 38, 13, 447, 4, 192, 50, 16, 6, 147, 2025, 19, 14, 22, 4, 1920, 4613, 469, 4, 22, 71, 87, 12, 16, 43, 530, 38, 76, 15, 13, 1247, 4, 22, 17, 515, 17, 12, 16, 626, 18, 2, 5, 62, 386, 12, 8, 316, 8, 106, 5, 4, 2223, 5244, 16, 480, 66, 3785, 33, 4, 130, 12, 16, 38, 619, 5, 25, 124, 51, 36, 135, 48, 25, 1415, 33, 6, 22, 12, 215, 28, 77, 52, 5, 14, 407, 16, 82, 2, 8, 4, 107, 117, 5952, 15, 256, 4, 2, 7, 3766, 5, 723, 36, 71, 43, 530, 476, 26, 400, 317, 46, 7, 4, 2, 1029, 13, 104, 88, 4, 381, 15, 297, 98, 32, 2071, 56, 26, 141, 6, 194, 7486, 18, 4, 226, 22, 21, 134, 476, 26, 480, 5, 144, 30, 5535, 18, 51, 36, 28, 224, 92, 25, 104, 4, 226, 65, 16, 38, 1334, 88, 12, 16, 283, 5, 16, 4472, 113, 103, 32, 15, 16, 5345, 19, 178, 32]
(218, 189)

# A dictionary mapping words to an integer index
word_index = imdb.get_word_index()

# The first indices are reserved
word_index = {k: (v + 3) for k, v in word_index.items()}
word_index["<PAD>"] = 0
word_index["<START>"] = 1
word_index["<UNK>"] = 2  # unknown
word_index["<UNUSED>"] = 3

reverse_word_index = dict([(value, key) for (key, value) in word_index.items()])

def decode_review(text):
    return ' '.join([reverse_word_index.get(i, '?') for i in text])

Downloading data from /tensorflow/tf-keras-datasets/imdb_word_index.json
1646592/1641221 [==============================] - 2s 1us/step

decode_review(train_data[0])

"<START> this film was just brilliant casting location scenery story direction everyone's really suited the part they played and you could just imagine being there robert <UNK> is an amazing actor and now the same being director <UNK> father came from the same scottish island as myself so i loved the fact there was a real connection with this film the witty remarks throughout the film were great it was just brilliant so much that i bought the film as soon as it was released for <UNK> and would recommend it to everyone to watch and the fly fishing was amazing really cried at the end it was so sad and you know what they say if you cry at a film it must have been good and this definitely was also <UNK> to the two little boy's that played the <UNK> of norman and paul they were just brilliant children are often left out of the <UNK> list i think because the stars that play them all grown up are such a big profile for the whole film but these children are amazing and should be praised for what they have done don't you think the whole story was so lovely because it was true and was someone's life after all that was shared with us all"

train_data = keras.preprocessing.sequence.pad_sequences(train_data,
                                                        value=word_index["<PAD>"],
                                                        padding='post',
                                                        maxlen=256)

test_data = keras.preprocessing.sequence.pad_sequences(test_data,
                                                       value=word_index["<PAD>"],
                                                       padding='post',
                                                       maxlen=256)

len(train_data[0]), len(train_data[1])

(256, 256)

print(train_data[0])

[   1   14   22   16   43  530  973 1622 1385   65  458 4468   66 3941
    4  173   36  256    5   25  100   43  838  112   50  670    2    9
   35  480  284    5  150    4  172  112  167    2  336  385   39    4
  172 4536 1111   17  546   38   13  447    4  192   50   16    6  147
 2025   19   14   22    4 1920 4613  469    4   22   71   87   12   16
   43  530   38   76   15   13 1247    4   22   17  515   17   12   16
  626   18    2    5   62  386   12    8  316    8  106    5    4 2223
 5244   16  480   66 3785   33    4  130   12   16   38  619    5   25
  124   51   36  135   48   25 1415   33    6   22   12  215   28   77
   52    5   14  407   16   82    2    8    4  107  117 5952   15  256
    4    2    7 3766    5  723   36   71   43  530  476   26  400  317
   46    7    4    2 1029   13  104   88    4  381   15  297   98   32
 2071   56   26  141    6  194 7486   18    4  226   22   21  134  476
   26  480    5  144   30 5535   18   51   36   28  224   92   25  104
    4  226   65   16   38 1334   88   12   16  283    5   16 4472  113
  103   32   15   16 5345   19  178   32    0    0    0    0    0    0
    0    0    0    0    0    0    0    0    0    0    0    0    0    0
    0    0    0    0    0    0    0    0    0    0    0    0    0    0
    0    0    0    0]
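To see what padding='post' and maxlen do in isolation, here is a tiny toy example (made-up sequences, not from the original post):

from tensorflow import keras

toy = [[1, 2, 3], [4, 5]]
padded = keras.preprocessing.sequence.pad_sequences(
    toy, value=0, padding='post', maxlen=4)
print(padded)
# [[1 2 3 0]
#  [4 5 0 0]]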

A description of the network architecture:

1. The input to the network has shape (-1, 256).
2. After the Embedding layer the shape is (-1, 256, 16); the layer holds a (10000, 16) weight matrix.
3. After GlobalAveragePooling1D the shape is (-1, 16); it averages the 256 embedding vectors of each review into one 16-dimensional vector (see the short sketch after this list).
4. After the first Dense layer the shape is (-1, 16); parameters: w: 16 × 16 + b: 16 = 272 in total.
5. After the second Dense layer the shape is (-1, 1); parameters: w: 16 × 1 + b: 1 = 17 in total.
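The pooling step is the least obvious one, so here is a minimal standalone sketch with toy numbers (not part of the original post) showing that GlobalAveragePooling1D simply averages over the sequence axis:

import numpy as np
from tensorflow import keras

# Toy batch: 1 sample, sequence length 3, embedding dimension 2.
x = np.array([[[1., 2.],
               [3., 4.],
               [5., 6.]]], dtype=np.float32)

# GlobalAveragePooling1D collapses the sequence axis by averaging...
pool_demo = keras.Sequential([keras.layers.GlobalAveragePooling1D(input_shape=(3, 2))])
print(pool_demo.predict(x))   # [[3. 4.]]

# ...which is the same as a plain mean over axis 1.
print(x.mean(axis=1))         # [[3. 4.]]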

vocab_size = 10000

model = keras.Sequential()
model.add(keras.layers.Embedding(vocab_size, 16))
model.add(keras.layers.GlobalAveragePooling1D())
model.add(keras.layers.Dense(16, activation=tf.nn.relu))
model.add(keras.layers.Dense(1, activation=tf.nn.sigmoid))

model.summary()

_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
embedding (Embedding)        (None, None, 16)          160000
_________________________________________________________________
global_average_pooling1d (Gl (None, 16)                0
_________________________________________________________________
dense (Dense)                (None, 16)                272
_________________________________________________________________
dense_1 (Dense)              (None, 1)                 17
=================================================================
Total params: 160,289
Trainable params: 160,289
Non-trainable params: 0
_________________________________________________________________

model.compile(optimizer=tf.train.AdamOptimizer(),
              loss='binary_crossentropy',
              metrics=['accuracy'])

x_val = train_data[:10000]
partial_x_train = train_data[10000:]

y_val = train_labels[:10000]
partial_y_train = train_labels[10000:]

history = model.fit(partial_x_train,
                    partial_y_train,
                    epochs=20,
                    batch_size=512,
                    validation_data=(x_val, y_val),
                    verbose=1)

Train on 15000 samples, validate on 10000 samples
Epoch 1/20
15000/15000 [==============================] - 3s 215us/step - loss: 0.6919 - acc: 0.5925 - val_loss: 0.6899 - val_acc: 0.6360
Epoch 2/20
15000/15000 [==============================] - 2s 159us/step - loss: 0.6863 - acc: 0.7131 - val_loss: 0.6824 - val_acc: 0.7418
Epoch 3/20
15000/15000 [==============================] - 2s 155us/step - loss: 0.6746 - acc: 0.7652 - val_loss: 0.6676 - val_acc: 0.7583
Epoch 4/20
15000/15000 [==============================] - 2s 153us/step - loss: 0.6534 - acc: 0.7707 - val_loss: 0.6440 - val_acc: 0.7636
Epoch 5/20
15000/15000 [==============================] - 2s 153us/step - loss: 0.6221 - acc: 0.7933 - val_loss: 0.6104 - val_acc: 0.7872
Epoch 6/20
15000/15000 [==============================] - 2s 153us/step - loss: 0.5820 - acc: 0.8095 - val_loss: 0.5713 - val_acc: 0.7985
Epoch 7/20
15000/15000 [==============================] - 2s 154us/step - loss: 0.5368 - acc: 0.8271 - val_loss: 0.5297 - val_acc: 0.8163
Epoch 8/20
15000/15000 [==============================] - 2s 159us/step - loss: 0.4907 - acc: 0.8427 - val_loss: 0.4891 - val_acc: 0.8306
Epoch 9/20
15000/15000 [==============================] - 3s 170us/step - loss: 0.4478 - acc: 0.8557 - val_loss: 0.4525 - val_acc: 0.8405
Epoch 10/20
15000/15000 [==============================] - 2s 165us/step - loss: 0.4089 - acc: 0.8692 - val_loss: 0.4213 - val_acc: 0.8482
Epoch 11/20
15000/15000 [==============================] - 2s 156us/step - loss: 0.3760 - acc: 0.8791 - val_loss: 0.3977 - val_acc: 0.8541
Epoch 12/20
15000/15000 [==============================] - 2s 153us/step - loss: 0.3483 - acc: 0.8852 - val_loss: 0.3745 - val_acc: 0.8616
Epoch 13/20
15000/15000 [==============================] - 3s 171us/step - loss: 0.3236 - acc: 0.8929 - val_loss: 0.3581 - val_acc: 0.8661
Epoch 14/20
15000/15000 [==============================] - 3s 171us/step - loss: 0.3031 - acc: 0.8981 - val_loss: 0.3436 - val_acc: 0.8711
Epoch 15/20
15000/15000 [==============================] - 3s 178us/step - loss: 0.2854 - acc: 0.9033 - val_loss: 0.3322 - val_acc: 0.8732
Epoch 16/20
15000/15000 [==============================] - 3s 173us/step - loss: 0.2702 - acc: 0.9057 - val_loss: 0.3230 - val_acc: 0.8755
Epoch 17/20
15000/15000 [==============================] - 2s 165us/step - loss: 0.2557 - acc: 0.9131 - val_loss: 0.3152 - val_acc: 0.8771
Epoch 18/20
15000/15000 [==============================] - 2s 155us/step - loss: 0.2431 - acc: 0.9171 - val_loss: 0.3087 - val_acc: 0.8799
Epoch 19/20
15000/15000 [==============================] - 2s 155us/step - loss: 0.2315 - acc: 0.9213 - val_loss: 0.3033 - val_acc: 0.8812
Epoch 20/20
15000/15000 [==============================] - 2s 164us/step - loss: 0.2213 - acc: 0.9236 - val_loss: 0.2991 - val_acc: 0.8821

results = model.evaluate(test_data, test_labels)
print(results)

25000/25000 [==============================] - 1s 38us/step
[0.3124048164367676, 0.87232]
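The evaluation above reports roughly 87% test accuracy. As a usage sketch that is not part of the original post, the trained model can also score a new review; encode_review below is a hypothetical helper that reuses word_index and the same padding as the training data:

vocab_size = 10000

def encode_review(text):
    # Unknown words, and words outside the 10,000-word vocabulary, fall back to <UNK>.
    ids = [word_index.get(w, word_index["<UNK>"]) for w in text.lower().split()]
    ids = [i if i < vocab_size else word_index["<UNK>"] for i in ids]
    return [word_index["<START>"]] + ids

sample = keras.preprocessing.sequence.pad_sequences(
    [encode_review("this film was just brilliant")],
    value=word_index["<PAD>"], padding='post', maxlen=256)

# Values near 1.0 mean the model scores the review as positive.
print(model.predict(sample))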

history_dict = history.history
history_dict.keys()

dict_keys(['val_loss', 'val_acc', 'loss', 'acc'])

import matplotlib.pyplot as plt

acc = history.history['acc']
val_acc = history.history['val_acc']
loss = history.history['loss']
val_loss = history.history['val_loss']

epochs = range(1, len(acc) + 1)

# "bo" is for "blue dot"
plt.plot(epochs, loss, 'bo', label='Training loss')
# b is for "solid blue line"
plt.plot(epochs, val_loss, 'b', label='Validation loss')
plt.title('Training and validation loss')
plt.xlabel('Epochs')
plt.ylabel('Loss')
plt.legend()
plt.show()

plt.clf()   # clear figure
acc_values = history_dict['acc']
val_acc_values = history_dict['val_acc']

plt.plot(epochs, acc, 'bo', label='Training acc')
plt.plot(epochs, val_acc, 'b', label='Validation acc')
plt.title('Training and validation accuracy')
plt.xlabel('Epochs')
plt.ylabel('Accuracy')
plt.legend()
plt.show()
