
[Part 10] Simple Volume Estimation Algorithm: Test Training

Upcoming plans

  • Revisit the KFold setup
  • Parameter tuning
  • Evaluation

Test training

Now that the model and the training/evaluation program are complete, I will pick rough hyperparameter values and establish a baseline.
The parameters are as follows.

loss function = mean_squared_error
optimizer = adam
hidden_neurons = 1000
out_neurons = 3
batch_size = 16
nb_epochs = 200
fold_num = 5
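The split sizes in the console output below follow from these settings: 1000 samples with fold_num = 5 give 800/200 per fold, and a further 20% validation split (an assumption, inferred from the "Train on 640 samples, validate on 160 samples" lines) yields 640 training and 160 validation samples. A minimal sketch of that arithmetic:

```python
# Split arithmetic for the baseline run.
# validation_split = 0.2 is an assumption inferred from the
# "Train on 640 / validate on 160" lines in the log below.
n_samples = 1000
fold_num = 5

test_size = n_samples // fold_num           # held-out fold: 200
train_pool = n_samples - test_size          # remaining per fold: 800

validation_split = 0.2                      # assumed Keras fit() setting
n_val = int(train_pool * validation_split)  # 160
n_train = train_pool - n_val                # 640

print(test_size, train_pool, n_train, n_val)  # → 200 800 640 160
```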

From the results with these parameters I will examine the training behavior and the validation accuracy and loss, then decide on a training strategy and tune by hand.
The run stopped with a GPU out-of-memory error before the fourth cross-validation fold, but the console output up to that point is shown below.

Console output
Using TensorFlow backend.
tf.estimator package not installed.
tf.estimator package not installed.
(1000, 224, 224, 3)
(1000, 3)
(800, 224, 224, 3) (200, 224, 224, 3) (800, 3) (200, 3)
/home/ec2-user/.local/lib/python3.6/site-packages/keras_applications/resnet50.py:265: UserWarning: The output shape of `ResNet50(include_top=False)` has been changed since Keras 2.2.0.
  warnings.warn('The output shape of `ResNet50(include_top=False)` '
/usr/local/lib/python3.6/site-packages/ipykernel_launcher.py:80: UserWarning: Update your `Model` call to the Keras 2 API: `Model(inputs=Tensor("in..., outputs=Tensor("se...)`
Train on 640 samples, validate on 160 samples
Epoch 1/200
640/640 [==============================] - 45s 71ms/step - loss: 23.6305 - acc: 0.3906 - val_loss: 3.1018 - val_acc: 0.5000
Epoch 2/200
640/640 [==============================] - 28s 43ms/step - loss: 0.2863 - acc: 0.3562 - val_loss: 0.1092 - val_acc: 0.5000
Epoch 3/200
640/640 [==============================] - 28s 44ms/step - loss: 0.0832 - acc: 0.6172 - val_loss: 0.0951 - val_acc: 0.5000
Epoch 4/200
640/640 [==============================] - 28s 44ms/step - loss: 0.0835 - acc: 0.5750 - val_loss: 0.0966 - val_acc: 0.5000
Epoch 5/200
640/640 [==============================] - 28s 44ms/step - loss: 0.0892 - acc: 0.5125 - val_loss: 3.9544 - val_acc: 0.4938
Epoch 6/200
640/640 [==============================] - 28s 44ms/step - loss: 0.0881 - acc: 0.4781 - val_loss: 2.4041 - val_acc: 0.4250
Epoch 7/200
640/640 [==============================] - 28s 44ms/step - loss: 0.0875 - acc: 0.4781 - val_loss: 0.2641 - val_acc: 0.5250
Epoch 8/200
640/640 [==============================] - 28s 44ms/step - loss: 0.0868 - acc: 0.4953 - val_loss: 0.0955 - val_acc: 0.5062
Epoch 9/200
640/640 [==============================] - 28s 44ms/step - loss: 0.0870 - acc: 0.4937 - val_loss: 0.0952 - val_acc: 0.5062
Epoch 10/200
640/640 [==============================] - 28s 44ms/step - loss: 0.0874 - acc: 0.4844 - val_loss: 0.0977 - val_acc: 0.5000
Epoch 11/200
640/640 [==============================] - 28s 44ms/step - loss: 0.0873 - acc: 0.5250 - val_loss: 0.0956 - val_acc: 0.5000
Epoch 12/200
640/640 [==============================] - 28s 44ms/step - loss: 0.0881 - acc: 0.5188 - val_loss: 0.0952 - val_acc: 0.5000
Epoch 13/200
640/640 [==============================] - 28s 44ms/step - loss: 0.0882 - acc: 0.5094 - val_loss: 0.0960 - val_acc: 0.5000
Epoch 00013: early stopping
<Figure size 640x480 with 1 Axes>
<Figure size 640x480 with 1 Axes>
Train on 640 samples, validate on 160 samples
Epoch 1/200
640/640 [==============================] - 43s 67ms/step - loss: 20.5028 - acc: 0.1172 - val_loss: 1.1231 - val_acc: 0.5250
Epoch 2/200
640/640 [==============================] - 28s 44ms/step - loss: 0.2839 - acc: 0.4781 - val_loss: 0.0956 - val_acc: 0.5250
Epoch 3/200
640/640 [==============================] - 28s 44ms/step - loss: 0.0936 - acc: 0.4953 - val_loss: 0.0870 - val_acc: 0.4750
Epoch 4/200
640/640 [==============================] - 28s 44ms/step - loss: 0.0912 - acc: 0.4812 - val_loss: 0.0886 - val_acc: 0.5250
Epoch 5/200
640/640 [==============================] - 28s 44ms/step - loss: 0.0907 - acc: 0.4812 - val_loss: 0.0866 - val_acc: 0.4750
Epoch 6/200
640/640 [==============================] - 28s 44ms/step - loss: 0.0916 - acc: 0.4625 - val_loss: 0.0865 - val_acc: 0.4750
Epoch 7/200
640/640 [==============================] - 28s 44ms/step - loss: 0.0900 - acc: 0.4594 - val_loss: 0.0858 - val_acc: 0.4750
Epoch 8/200
640/640 [==============================] - 28s 44ms/step - loss: 0.0901 - acc: 0.4875 - val_loss: 0.0865 - val_acc: 0.4750
Epoch 9/200
640/640 [==============================] - 28s 44ms/step - loss: 0.0902 - acc: 0.5000 - val_loss: 0.0855 - val_acc: 0.4750
Epoch 10/200
640/640 [==============================] - 28s 44ms/step - loss: 0.0911 - acc: 0.4812 - val_loss: 0.0864 - val_acc: 0.4750
Epoch 11/200
640/640 [==============================] - 28s 44ms/step - loss: 0.0938 - acc: 0.5250 - val_loss: 0.0856 - val_acc: 0.4750
Epoch 12/200
640/640 [==============================] - 28s 44ms/step - loss: 0.0923 - acc: 0.4906 - val_loss: 0.0890 - val_acc: 0.4750
Epoch 13/200
640/640 [==============================] - 28s 44ms/step - loss: 0.0904 - acc: 0.4969 - val_loss: 0.0966 - val_acc: 0.5250
Epoch 14/200
640/640 [==============================] - 28s 44ms/step - loss: 0.0907 - acc: 0.5000 - val_loss: 0.0933 - val_acc: 0.5250
Epoch 15/200
640/640 [==============================] - 28s 44ms/step - loss: 0.0919 - acc: 0.4844 - val_loss: 0.0867 - val_acc: 0.5250
Epoch 16/200
640/640 [==============================] - 28s 44ms/step - loss: 0.0929 - acc: 0.5031 - val_loss: 0.0889 - val_acc: 0.4750
Epoch 17/200
640/640 [==============================] - 28s 44ms/step - loss: 0.0905 - acc: 0.5063 - val_loss: 0.0909 - val_acc: 0.5250
Epoch 18/200
640/640 [==============================] - 28s 44ms/step - loss: 0.0911 - acc: 0.4937 - val_loss: 0.0883 - val_acc: 0.5250
Epoch 19/200
640/640 [==============================] - 28s 44ms/step - loss: 0.0914 - acc: 0.5094 - val_loss: 0.0875 - val_acc: 0.4750
Epoch 00019: early stopping
<Figure size 640x480 with 1 Axes>
<Figure size 640x480 with 1 Axes>
Train on 640 samples, validate on 160 samples
Epoch 1/200
640/640 [==============================] - 44s 69ms/step - loss: 18.8080 - acc: 0.4672 - val_loss: 2.0560 - val_acc: 0.2250
Epoch 2/200
640/640 [==============================] - 28s 44ms/step - loss: 0.1538 - acc: 0.6891 - val_loss: 0.1134 - val_acc: 0.5125
Epoch 3/200
640/640 [==============================] - 28s 44ms/step - loss: 0.0432 - acc: 0.8609 - val_loss: 0.0734 - val_acc: 0.6250
Epoch 4/200
640/640 [==============================] - 28s 44ms/step - loss: 0.0256 - acc: 0.9031 - val_loss: 0.0612 - val_acc: 0.8000
Epoch 5/200
640/640 [==============================] - 28s 44ms/step - loss: 0.0187 - acc: 0.9188 - val_loss: 0.0230 - val_acc: 0.8375
Epoch 6/200
640/640 [==============================] - 28s 44ms/step - loss: 0.0133 - acc: 0.9625 - val_loss: 0.0211 - val_acc: 0.9125
Epoch 7/200
640/640 [==============================] - 28s 44ms/step - loss: 0.0149 - acc: 0.9406 - val_loss: 0.0110 - val_acc: 0.8875
Epoch 8/200
640/640 [==============================] - 28s 44ms/step - loss: 0.0139 - acc: 0.9297 - val_loss: 0.0089 - val_acc: 0.9563
Epoch 9/200
640/640 [==============================] - 28s 44ms/step - loss: 0.0161 - acc: 0.9531 - val_loss: 0.0592 - val_acc: 0.7812
Epoch 10/200
640/640 [==============================] - 28s 44ms/step - loss: 0.0312 - acc: 0.8953 - val_loss: 0.0646 - val_acc: 0.6813
Epoch 11/200
640/640 [==============================] - 28s 44ms/step - loss: 0.0315 - acc: 0.8875 - val_loss: 0.0667 - val_acc: 0.8187
Epoch 12/200
640/640 [==============================] - 28s 44ms/step - loss: 0.0241 - acc: 0.8844 - val_loss: 0.0428 - val_acc: 0.8187
Epoch 13/200
640/640 [==============================] - 28s 44ms/step - loss: 0.0256 - acc: 0.8906 - val_loss: 0.0529 - val_acc: 0.8375
Epoch 14/200
640/640 [==============================] - 28s 44ms/step - loss: 0.0177 - acc: 0.9266 - val_loss: 0.0125 - val_acc: 0.9125
Epoch 15/200
640/640 [==============================] - 28s 44ms/step - loss: 0.0158 - acc: 0.9219 - val_loss: 0.0347 - val_acc: 0.8000
Epoch 16/200
640/640 [==============================] - 28s 44ms/step - loss: 0.0143 - acc: 0.9234 - val_loss: 0.0230 - val_acc: 0.8812
Epoch 17/200
640/640 [==============================] - 28s 44ms/step - loss: 0.0134 - acc: 0.9188 - val_loss: 0.0113 - val_acc: 0.9563
Epoch 18/200
640/640 [==============================] - 28s 44ms/step - loss: 0.0121 - acc: 0.9344 - val_loss: 0.0209 - val_acc: 0.8750
Epoch 00018: early stopping

The first two folds had a very low validation accuracy of 50%, while the third fold scored above 85%.
This variation in accuracy suggests a bias in how the data are being split.
Since the splitting needed to change, I set shuffle=True on KFold and added a fixed random seed with random_state=71.
To relieve the GPU memory shortage, I also reduced the hidden layer from 1000 to 500 units and lowered the batch size, then retrained.
Rerunning the cross-validation gave the results below.

Train on 640 samples, validate on 160 samples
Epoch 47/200
640/640 [==============================] - 25s 40ms/step - loss: 0.1106 - acc: 0.7516 - val_loss: 0.0112 - val_acc: 0.9750

Train on 640 samples, validate on 160 samples
Epoch 22/200
640/640 [==============================] - 25s 40ms/step - loss: 0.1453 - acc: 0.7094 - val_loss: 0.0149 - val_acc: 0.9375

Train on 640 samples, validate on 160 samples
Epoch 64/200
640/640 [==============================] - 25s 40ms/step - loss: 0.0872 - acc: 0.7531 - val_loss: 0.0076 - val_acc: 0.9875

Train on 640 samples, validate on 160 samples
Epoch 34/200
640/640 [==============================] - 26s 40ms/step - loss: 0.1295 - acc: 0.7312 - val_loss: 0.0110 - val_acc: 0.9187

Train on 640 samples, validate on 160 samples
Epoch 50/200
640/640 [==============================] - 26s 40ms/step - loss: 0.1251 - acc: 0.7391 - val_loss: 0.0141 - val_acc: 0.9625
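The split change described above (shuffle=True, random_state=71) can be sketched with scikit-learn's KFold; the index array here is a stand-in for the real image data, which is not shown in this post:

```python
import numpy as np
from sklearn.model_selection import KFold

X = np.arange(1000)  # stand-in for the 1000 samples

# Shuffle before splitting, with a fixed seed so the folds are reproducible.
kf = KFold(n_splits=5, shuffle=True, random_state=71)

for fold, (train_idx, test_idx) in enumerate(kf.split(X), start=1):
    # Each fold still yields 800 training / 200 test samples,
    # but the membership of each fold is now randomized.
    print(fold, len(train_idx), len(test_idx))
```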

Training accuracy was only around 75%, but validation accuracy exceeded 90%.
One thing that concerns me is that during augmentation I generated similar images and mixed them in at random, so the validation data often resembled images the model had already seen during training.
Next time I will tune the parameters and, at the same time, introduce grouping so that samples of the same kind never appear in both the training and validation data.
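That grouping idea can be sketched with scikit-learn's GroupKFold; the numbers here (200 source images, 5 augmented variants each) are hypothetical, and the real group ids would come from the augmentation pipeline:

```python
import numpy as np
from sklearn.model_selection import GroupKFold

# Hypothetical setup: 200 source images, each augmented into 5 variants.
n_sources, n_aug = 200, 5
groups = np.repeat(np.arange(n_sources), n_aug)  # group id = source image
X = np.arange(n_sources * n_aug)                 # stand-in for the samples

gkf = GroupKFold(n_splits=5)
leaks = 0
for train_idx, test_idx in gkf.split(X, groups=groups):
    # Count source images that appear on both sides of a split.
    leaks += len(set(groups[train_idx]) & set(groups[test_idx]))
print(leaks)  # 0: no source image is shared between train and validation
```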

