
[NLP Project] LSTM + self-attention

sillon 2022. 11. 8. 10:40

Adding self-attention layers to the model

model.add(Bidirectional(LSTM(units=128, return_sequences=True, dropout=0.2, recurrent_dropout=0.2)))
model.add(SeqSelfAttention(attention_activation='sigmoid'))
model.add(Bidirectional(LSTM(units=64, return_sequences=True, dropout=0.2, recurrent_dropout=0.2)))
model.add(SeqSelfAttention(attention_activation='sigmoid'))
model.add(Dense(1, activation='sigmoid'))

model.py

from keras.models import Sequential
from keras.layers import Dense, Embedding, LSTM, Bidirectional, TimeDistributed
from keras_self_attention import SeqSelfAttention
from keras.optimizers import Adam


def modeling(vocab_size, max_len, tag_size):
    model = Sequential()
    model.add(Embedding(input_dim=vocab_size, output_dim=128, input_length=max_len, mask_zero=True))
    model.add(Bidirectional(LSTM(units=128, return_sequences=True, dropout=0.2, recurrent_dropout=0.2)))
    model.add(SeqSelfAttention(attention_activation='sigmoid'))
    model.add(Bidirectional(LSTM(units=64, return_sequences=True, dropout=0.2, recurrent_dropout=0.2)))
    model.add(SeqSelfAttention(attention_activation='sigmoid'))
    model.add(Dense(1, activation='sigmoid'))  # note: squeezes each timestep to a single value, a severe bottleneck before the tag classifier (probably part of why training stalled)
    model.add(TimeDistributed(Dense(tag_size, activation='softmax')))  # final output (softmax, since this is multi-class tagging)
    model.compile(loss='categorical_crossentropy',
                  optimizer=Adam(0.005),  # learning rate
                  metrics=['accuracy'])  # compile the model

    return model
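For reference, a minimal sketch of how this function might be trained. X_train and y_train are assumed to be preprocessed arrays (one-hot tag labels of shape (n, max_len, tag_size)) from steps not shown in this post, and validation_split=0.1 is an assumed choice; the batch size and epoch count are the values mentioned below.

# Hypothetical training call; X_train / y_train come from preprocessing steps
# not shown here, and validation_split=0.1 is an assumption.
model = modeling(vocab_size, max_len, tag_size)
history = model.fit(X_train, y_train,
                    batch_size=128, epochs=3,
                    validation_split=0.1)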

 

As expected, training is much slower than with the plain LSTM.

batch_size=128, epochs=3

The model was heavy and wasn't learning well, so I stopped training (accuracy came out around 68%) and changed the layer stack again.

 

def modeling(vocab_size, max_len, tag_size):
    model = Sequential()
    model.add(Embedding(input_dim=vocab_size, output_dim=128, input_length=max_len, mask_zero=True))
    model.add(Bidirectional(LSTM(units=128, return_sequences=True, dropout=0.2, recurrent_dropout=0.2)))
    model.add(SeqSelfAttention(attention_activation='relu'))
    model.add(TimeDistributed(Dense(tag_size, activation='softmax')))  # final output (softmax, since this is multi-class tagging)
    model.compile(loss='categorical_crossentropy',
                  optimizer=Adam(0.001),  # learning rate
                  metrics=['accuracy'])  # compile the model

    return model

epochs=3, lr=0.001

 

Training log


Epoch 1/3
2022-11-08 10:57:54.993638: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcublas.so.10
563/563 [==============================] - 178s 310ms/step - loss: 0.2701 - accuracy: 0.6787 - val_loss: 0.1965 - val_accuracy: 0.6871
Epoch 2/3
563/563 [==============================] - 173s 307ms/step - loss: 0.1933 - accuracy: 0.6886 - val_loss: 0.1828 - val_accuracy: 0.7056
Epoch 3/3
563/563 [==============================] - 173s 307ms/step - loss: 0.1804 - accuracy: 0.7059 - val_loss: 0.1728 - val_accuracy: 0.7212
563/563 [==============================] - 21s 38ms/step - loss: 0.1728 - accuracy: 0.7212
Word              |Actual |Predicted
----------------------------------
무단전재&재배포         |-      |-
금즙               |-      |-
-윤신욱             |PER_B  |PER_B
-                |CVL_B  |PER_B
uk82@mydaily.co.kr|TRM_B  |-
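
For context, a comparison table like the one above could be printed roughly as follows. This is only a sketch: index_to_word and index_to_tag are hypothetical lookup dicts assumed to exist from preprocessing, not names from the original code.

import numpy as np

# Hypothetical inspection snippet; index_to_word / index_to_tag are assumed
# preprocessing artifacts mapping integer indices back to tokens and NER tags.
i = 0  # index of the test sentence to inspect
y_pred = model.predict(np.array([X_test[i]]))   # shape: (1, max_len, tag_size)
pred_tags = np.argmax(y_pred, axis=-1)[0]       # predicted tag index per token
true_tags = np.argmax(y_test[i], axis=-1)       # actual tag index per token

print('{:18}|{:7}|{}'.format('Word', 'Actual', 'Predicted'))
print('-' * 36)
for w, t, p in zip(X_test[i], true_tags, pred_tags):
    if w != 0:  # skip padding (index 0 is the mask value)
        print('{:18}|{:7}|{}'.format(index_to_word[w], index_to_tag[t], index_to_tag[p]))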

 

Increasing the number of epochs might help...
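
If I do train longer, Keras callbacks could stop training once the validation loss stops improving. A minimal sketch; the patience value and checkpoint filename are arbitrary assumptions, not from this project:

from keras.callbacks import EarlyStopping, ModelCheckpoint

# Assumed setup: stop when val_loss stops improving and keep the best weights.
callbacks = [
    EarlyStopping(monitor='val_loss', patience=2, restore_best_weights=True),
    ModelCheckpoint('best_model.h5', monitor='val_loss', save_best_only=True),
]
model.fit(X_train, y_train, batch_size=128, epochs=20,
          validation_split=0.1, callbacks=callbacks)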

 

This model was saved as test_model3.h5.
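
Since SeqSelfAttention is a custom layer from keras-self-attention, reloading the saved file needs custom_objects. A sketch:

from keras.models import load_model
from keras_self_attention import SeqSelfAttention

model.save('test_model3.h5')
# The custom attention layer must be registered when loading the file back:
model = load_model('test_model3.h5',
                   custom_objects=SeqSelfAttention.get_custom_objects())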
