
tensorflow - Does Model.fit() upload the whole training dataset to the GPU?

I'm training an LSTM on a dataset of a couple of GB using the Keras API with the TensorFlow backend. When running Model.fit() on in-memory data (numpy arrays), it allocates 8GB of memory in a single request, which doesn't happen when loading only a small subset of the data. My GPU can't hold both the model parameters and that 8GB, so it runs out of memory and stops. I'm fairly sure this started happening after I upgraded from the TF2 beta to the TF2 RC. Here's how I call fit:

import tensorflow as tf
from tensorflow import keras

tb = tf.keras.callbacks.TensorBoard(log_dir=log_dir, histogram_freq=1)
es = keras.callbacks.EarlyStopping(monitor='val_loss', mode='min', verbose=1, patience=patience*2, restore_best_weights=True)
lr_reduce = keras.callbacks.ReduceLROnPlateau(factor=0.1, patience=patience, verbose=1)
chkpointing = keras.callbacks.ModelCheckpoint(weight_fname, monitor='val_loss', verbose=0, save_best_only=True,
                                              save_weights_only=True, mode='auto')

model.fit(train_data_x, train_data_y, validation_data=(test_data_x, test_data_y), batch_size=cfg['batch_size'],
                  epochs=nepochs, validation_freq=1, callbacks=[lr_reduce, es, tb, chkpointing],
                  class_weight=cfg['class_weight'], shuffle=True)

Is allocating space for the whole dataset on the GPU intended behavior? How can I prevent it from happening?

EDIT:

I updated the code to limit memory allocation. The limit works, in the sense that TF reports having access to less memory than before, but it still attempts that 8.14GiB allocation. Here's how I limit the memory and select the GPU:

import os
import tensorflow as tf

def select_gpu(gpu_id=-1, max_usage=.5):  # max 2 GPUs only
    os.environ["CUDA_VISIBLE_DEVICES"] = str(gpu_id) if gpu_id != -1 else '0,1'
    gpus = tf.config.experimental.list_physical_devices('GPU')
    max_memory = 11534  # MB, from: grep -i --color memory /var/log/Xorg.0.log
    for gpu in gpus:
        print('GPU FOUND:', gpu)
        tf.config.experimental.set_memory_growth(gpu, True)  # FIXME true
        tf.config.experimental.set_virtual_device_configuration(gpu,
            [tf.config.experimental.VirtualDeviceConfiguration(memory_limit=int(max_memory * max_usage))])
    print('RUNNING ON GPU #{}'.format(gpu_id))

# ... just call select_gpu(0) at the beginning of the script

Here's the error:

_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
time_distributed (TimeDistri (None, 42, 256)           7168      
_________________________________________________________________
cu_dnnlstm (CuDNNLSTM)       (None, 42, 256)           526336    
_________________________________________________________________
cu_dnnlstm_1 (CuDNNLSTM)     (None, 42, 256)           526336    
_________________________________________________________________
cu_dnnlstm_2 (CuDNNLSTM)     (None, 42, 256)           526336    
_________________________________________________________________
cu_dnnlstm_3 (CuDNNLSTM)     (None, 42, 256)           526336    
_________________________________________________________________
cu_dnnlstm_4 (CuDNNLSTM)     (None, 42, 256)           526336    
_________________________________________________________________
cu_dnnlstm_5 (CuDNNLSTM)     (None, 256)               526336    
_________________________________________________________________
dense_1 (Dense)              (None, 256)               65792     
_________________________________________________________________
dense_2 (Dense)              (None, 1)                 257       
=================================================================
Total params: 3,231,233
Trainable params: 3,231,233
Non-trainable params: 0
_________________________________________________________________
None
2019-10-27 12:36:48.833843: W tensorflow/core/common_runtime/bfc_allocator.cc:419] Allocator (GPU_0_bfc) ran out of memory trying to allocate 8.14GiB (rounded to 8738821888).  Current allocation summary follows.
2019-10-27 12:36:48.833927: I tensorflow/core/common_runtime/bfc_allocator.cc:869] Bin (256):   Total Chunks: 16, Chunks in use: 15. 4.0KiB allocated for chunks. 3.8KiB in use in bin. 72B client-requested in use in bin.
2019-10-27 12:36:48.833944: I tensorflow/core/common_runtime/bfc_allocator.cc:869] Bin (512):   Total Chunks: 0, Chunks in use: 0. 0B allocated for chunks. 0B in use in bin. 0B client-requested in use in bin.
2019-10-27 12:36:48.833958: I tensorflow/core/common_runtime/bfc_allocator.cc:869] Bin (1024):  Total Chunks: 5, Chunks in use: 4. 5.5KiB allocated for chunks. 4.2KiB in use in bin. 4.0KiB client-requested in use in bin.
2019-10-27 12:36:48.833970: I tensorflow/core/common_runtime/bfc_allocator.cc:869] Bin (2048):  Total Chunks: 0, Chunks in use: 0. 0B allocated for chunks. 0B in use in bin. 0B client-requested in use in bin.
2019-10-27 12:36:48.833982: I tensorflow/core/common_runtime/bfc_allocator.cc:869] Bin (4096):  Total Chunks: 1, Chunks in use: 0. 4.8KiB allocated for chunks. 0B in use in bin. 0B client-requested in use in bin.
2019-10-27 12:36:48.833998: I tensorflow/core/common_runtime/bfc_allocator.cc:869] Bin (8192):  Total Chunks: 6, Chunks in use: 6. 49.8KiB allocated for chunks. 49.8KiB in use in bin. 48.0KiB client-requested in use in bin.
2019-10-27 12:36:48.834012: I tensorflow/core/common_runtime/bfc_allocator.cc:869] Bin (16384):     Total Chunks: 1, Chunks in use: 1. 27.0KiB allocated for chunks. 27.0KiB in use in bin. 27.0KiB client-requested in use in bin.
2019-10-27 12:36:48.834023: I tensorflow/core/common_runtime/bfc_allocator.cc:869] Bin (32768):     Total Chunks: 0, Chunks in use: 0. 0B allocated for chunks. 0B in use in bin. 0B client-requested in use in bin.
2019-10-27 12:36:48.834034: I tensorflow/core/common_runtime/bfc_allocator.cc:869] Bin (65536):     Total Chunks: 0, Chunks in use: 0. 0B allocated for chunks. 0B in use in bin. 0B client-requested in use in bin.
2019-10-27 12:36:48.834045: I tensorflow/core/common_runtime/bfc_allocator.cc:869] Bin (131072):    Total Chunks: 0, Chunks in use: 0. 0B allocated for chunks. 0B in use in bin. 0B client-requested in use in bin.
2019-10-27 12:36:48.834060: I tensorflow/core/common_runtime/bfc_allocator.cc:869] Bin (262144):    Total Chunks: 1, Chunks in use: 1. 504.0KiB allocated for chunks. 504.0KiB in use in bin. 256.0KiB client-requested in use in bin.
2019-10-27 12:36:48.834073: I tensorflow/core/common_runtime/bfc_allocator.cc:869] Bin (524288):    Total Chunks: 1, Chunks in use: 0. 512.0KiB allocated for chunks. 0B in use in bin. 0B client-requested in use in bin.
2019-10-27 12:36:48.834088: I tensorflow/core/common_runtime/bfc_allocator.cc:869] Bin (1048576):   Total Chunks: 12, Chunks in use: 12. 12.00MiB allocated for chunks. 12.00MiB in use in bin. 12.00MiB client-requested in use in bin.
2019-10-27 12:36:48.834099: I tensorflow/core/common_runtime/bfc_allocator.cc:869] Bin (2097152):   Total Chunks: 0, Chunks in use: 0. 0B allocated for chunks. 0B in use in bin. 0B client-requested in use in bin.
2019-10-27 12:36:48.834110: I tensorflow/core/common_runtime/bfc_allocator.cc:869] Bin (4194304):   Total Chunks: 0, Chunks in use: 0. 0B allocated for chunks. 0B in use in bin. 0B client-requested in use in bin.
2019-10-27 12:36:48.834122: I tensorflow/core/common_runtime/bfc_allocator.cc:869] Bin (8388608):   Total Chunks: 0, Chunks in use: 0. 0B allocated for chunks. 0B in use in bin. 0B client-requested in use in bin.
2019-10-27 12:36:48.834132: I tensorflow/core/common_runtime/bfc_allocator.cc:869] Bin (16777216):  Total Chunks: 0, Chunks in use: 0. 0B allocated for chunks. 0B in use in bin. 0B client-requested in use in bin.
2019-10-27 12:36:48.834143: I tensorflow/core/common_runtime/bfc_allocator.cc:869] Bin (33554432):  Total Chunks: 0, Chunks in use: 0. 0B allocated for chunks. 0B in use in bin. 0B client-requested in use in bin.
2019-10-27 12:36:48.834156: I tensorflow/core/common_runtime/bfc_allocator.cc:869] Bin (67108864):  Total Chunks: 0, Chunks in use: 0. 0B allocated for chunks. 0B in use in bin. 0B client-requested in use in bin.
2019-10-27 12:36:48.834167: I tensorflow/core/common_runtime/bfc_allocator.cc:869] Bin (134217728):     Total Chunks: 0, Chunks in use: 0. 0B allocated for chunks. 0B in use in bin. 0B client-requested in use in bin.
2019-10-27 12:36:48.834180: I tensorflow/core/common_runtime/bfc_allocator.cc:869] Bin (268435456):     Total Chunks: 1, Chunks in use: 0. 4.49GiB allocated for chunks. 0B in use in bin. 0B client-requested in use in bin.
2019-10-27 12:36:48.834193: I tensorflow/core/common_runtime/bfc_allocator.cc:885] Bin for 8.14GiB was 256.00MiB, Chunk State: 
2019-10-27 12:36:48.834213: I tensorflow/core/common_runtime/bfc_allocator.cc:891]   Size: 4.49GiB | Requested Size: 1.00MiB | in_use: 0 | bin_num: 20, prev:   Size: 1.00MiB | Requested Size: 1.00MiB | in_use: 1 | bin_num: -1
2019-10-27 12:36:48.834223: I tensorflow/core/common_runtime/bfc_allocator.cc:898] Next region of size 4837081088
2019-10-27 12:36:48.834237: I tensorflow/core/common_runtime/bfc_allocator.cc:905] InUse at 0x7f3cf6000000 next 1 of size 256
2019-10-27 12:36:48.834247: I tensorflow/core/common_runtime/bfc_allocator.cc:905] InUse at 0x7f3cf6000100 next 2 of size 256
2019-10-27 12:36:48.834257: I tensorflow/core/common_runtime/bfc_allocator.cc:905] InUse at 0x7f3cf6000200 next 3 of size 1280
2019-10-27 12:36:48.834267: I tensorflow/core/common_runtime/bfc_allocator.cc:905] InUse at 0x7f3cf6000700 next 4 of size 256
2019-10-27 12:36:48.834277: I tensorflow/core/common_runtime/bfc_allocator.cc:905] InUse at 0x7f3cf6000800 next 5 of size 1024
2019-10-27 12:36:48.834287: I tensorflow/core/common_runtime/bfc_allocator.cc:905] InUse at 0x7f3cf6000c00 next 8 of size 256
2019-10-27 12:36:48.834296: I tensorflow/core/common_runtime/bfc_allocator.cc:905] InUse at 0x7f3cf6000d00 next 9 of size 256
2019-10-27 12:36:48.834306: I tensorflow/core/common_runtime/bfc_allocator.cc:905] InUse at 0x7f3cf6000e00 next 10 of size 256
2019-10-27 12:36:48.834316: I tensorflow/core/common_runtime/bfc_allocator.cc:905] InUse at 0x7f3cf6000f00 next 13 of size 256
2019-10-27 12:36:48.834325: I tensorflow/core/common_runtime/bfc_allocator.cc:905] InUse at 0x7f3cf6001000 next 34 of size 256
2019-10-27 12:36:48.834335: I tensorflow/core/common_runtime/bfc_allocator.cc:905] InUse at 0x7f3cf6001100 next 35 of size 256
2019-10-27 12:36:48.834344: I tensorflow/core/common_runtime/bfc_allocator.cc:905] InUse at 0x7f3cf6001200 next 37 of size 256
2019-10-27 12:36:48.834354: I tensorflow/core/common_runtime/bfc_allocator.cc:905] InUse at 0x7f3cf6001300 next 16 of size 256
2019-10-27 12:36:48.834363: I tensorflow/core/common_runtime/bfc_allocator.cc:905] InUse at 0x7f3cf6001400 next 14 of size 256
2019-10-27 12:36:48.834373: I tensorflow/core/common_runtime/bfc_allocator.cc:905] Free  at 0x7f3cf6001500 next 40 of size 1280
2019-10-27 12:36:48.834382: I tensorflow/core/common_runtime/bfc_allocator.cc:905] InUse at 0x7f3cf6001a00 next 41 of size 1024
2019-10-27 12:36:48.834392: I tensorflow/core/common_runtime/bfc_allocator.cc:905] Free  at 0x7f3cf6001e00 next 18 of size 4864
2019-10-27 12:36:48.834402: I tensorflow/core/common_runtime/bfc_allocator.cc:905] InUse at 0x7f3cf6003100 next 19 of size 8192
2019-10-27 12:36:48.834411: I tensorflow/core/common_runtime/bfc_allocator.cc:905] InUse at 0x7f3cf6005100 next 36 of size 1024
2019-10-27 12:36:48.834420: I tensorflow/core/common_runtime/bfc_allocator.cc:905] InUse at 0x7f3cf6005500 next 39 of size 256
2019-10-27 12:36:48.834430: I tensorflow/core/common_runtime/bfc_allocator.cc:905] InUse at 0x7f3cf6005600 next 42 of size 256
2019-10-27 12:36:48.834439: I tensorflow/core/common_runtime/bfc_allocator.cc:905] InUs


1 Answer


This is TensorFlow's default behavior: it allocates more GPU memory than it actually needs, so what you're seeing isn't necessarily the dataset itself being uploaded. Only the model and the tensors/data for the current step need to live in GPU memory. In TF2 you can cap how much TF is allowed to grab like this:

import tensorflow as tf

max_memory = 8000  # dedicated GPU memory in MB; run 'dxdiag' to get the exact figure
max_usage = 0.95 * max_memory  # example: allow TF to use up to 95% of it

gpus = tf.config.experimental.list_physical_devices('GPU')
tf.config.experimental.set_virtual_device_configuration(
        gpus[0],
        [tf.config.experimental.VirtualDeviceConfiguration(memory_limit=max_usage)])
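
Note that memory_limit is specified in MB, and the virtual device configuration has to be set before any GPU is initialized (i.e. at the very start of the script, before anything touches the GPU), otherwise TensorFlow raises a RuntimeError.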

Also see the TensorFlow docs on limiting GPU memory growth, and the relevant GitHub issue.
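
For reference, the memory-growth approach mentioned in those docs looks roughly like this (a minimal sketch, not part of the original answer): instead of reserving nearly all GPU memory up front, TF starts small and grows the allocation as needed.

import tensorflow as tf

# Must run before any GPU has been initialized, i.e. at the top of the script.
gpus = tf.config.experimental.list_physical_devices('GPU')
for gpu in gpus:
    # Allocate GPU memory on demand instead of reserving (almost) all of it.
    tf.config.experimental.set_memory_growth(gpu, True)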


UPDATE: TF2 eager execution seems to have a known memory-management issue; as a workaround, disable it and run in graph mode, which can also be significantly faster:

tf.compat.v1.disable_eager_execution()
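
If the problem really is the whole numpy dataset being materialized as one giant tensor, another option (not part of the original answer; variable names are taken from the question) is to wrap the arrays in a tf.data.Dataset, so that fit() should only copy one batch at a time to the GPU while the full arrays stay in host memory. A rough sketch:

import tensorflow as tf

# train_data_x / train_data_y / test_data_x / test_data_y, cfg, nepochs,
# model and the callbacks come from the question.
batch_size = cfg['batch_size']

train_ds = (tf.data.Dataset.from_tensor_slices((train_data_x, train_data_y))
            .shuffle(buffer_size=10000)  # shuffle buffer size: tune to taste
            .batch(batch_size)
            .prefetch(tf.data.experimental.AUTOTUNE))
val_ds = tf.data.Dataset.from_tensor_slices((test_data_x, test_data_y)).batch(batch_size)

model.fit(train_ds, validation_data=val_ds, epochs=nepochs,
          callbacks=[lr_reduce, es, tb, chkpointing])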
