
tensorflow - keras add external trainable variable to graph

I am working on language modelling, and the vocabulary is large, so I want to use sampled_softmax_loss from TensorFlow. The problem is that the weights and biases passed as arguments to sampled_softmax_loss do not seem to be trainable (their values don't change after training).

So I guess I should add them to the computation graph that Keras builds automatically for the Model, but I have spent a lot of time on this and still haven't found a proper way to do so.

So, once again: I want to add external trainable tf.Variables to the Keras computation graph. Does anyone know a way to do this?
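For reference, tf.nn.sampled_softmax_loss expects the output weight matrix with shape [num_classes, dim] and a bias vector with shape [num_classes]. A minimal sketch of how such external variables could be created (TF 1.x style, matching the code below; the initializers and variable names are my assumptions):

import tensorflow as tf

# Hypothetical creation of the external variables; OUTPUT_COUNT and the
# 256-dim context vector come from the model and loss code below.
transposed_W = tf.Variable(
    tf.random_normal([OUTPUT_COUNT + 1, 256], stddev=0.05),
    name='sampled_softmax_W')               # shape [num_classes, dim]
b = tf.Variable(tf.zeros([OUTPUT_COUNT + 1]),
                name='sampled_softmax_b')   # shape [num_classes]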

My model (head and tail):

from keras.layers import Input, Embedding, Dense
from keras.models import Model

input_sentence = Input(shape=(INPUT_LENGTH,), dtype='int32')
words = Embedding(embedding_matrix.shape[0], embedding_matrix.shape[1],
                  weights=[embedding_matrix], trainable=True)(input_sentence)

...

context = Dense(256, activation='tanh')(context)

model = Model(inputs=input_sentence, outputs=context, name=name)

The loss:

def softmax_fine_loss(labels, logits, transposed_W=None, b=None):
    # Apply sampled softmax per timestep. Tuple-parameter lambdas are
    # Python 2 only, so unpack the (labels, logits) pair by indexing instead.
    res = tf.map_fn(
        lambda x: tf.nn.sampled_softmax_loss(transposed_W, b, x[0], x[1],
                                             num_sampled=1000,
                                             num_classes=OUTPUT_COUNT + 1),
        (labels, logits), dtype=tf.float32)
    return res

loss = lambda labels, logits: softmax_fine_loss(labels, logits, transposed_W=transposed_W, b=b)

model_truncated.compile(optimizer=optimizer, loss=loss, sample_weight_mode='temporal')
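A quick way to confirm the symptom described above (a diagnostic sketch; the fit arguments are placeholders):

from keras import backend as K

w_before = K.get_value(transposed_W)
model_truncated.fit(x_train, y_train, sample_weight=sample_weights)  # placeholder data
w_after = K.get_value(transposed_W)
print('W changed:', (w_before != w_after).any())  # prints False without the fix below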

1 Answer


I have finally found a workaround.

Let's say we need to train weights W and biases b with our model.

The workaround is simply to add them to one of the trainable layers of our model:

model.layers[-1].trainable_weights.extend([W, b])

Then we can compile the model:

model.compile(...)

It is extremely important to add the variables to a trainable layer. For example, I experimented with a Sequential model, and adding [W, b] to an Activation layer does not actually make them trainable.
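Putting it together, a minimal sketch of the workaround (carrying over the transposed_W and b variables and the loss from the question; nothing here goes beyond what the answer describes):

# Register the external variables with a trainable layer (here the last layer)
# so Keras hands them to the optimizer along with the layer's own weights.
model.layers[-1].trainable_weights.extend([transposed_W, b])

# Compile after extending, so the training function picks up the new variables.
model.compile(optimizer=optimizer, loss=loss, sample_weight_mode='temporal')

# Sanity check: both variables should now be tracked as trainable.
assert any(v is transposed_W for v in model.trainable_weights)
assert any(v is b for v in model.trainable_weights)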

