You may want to review this article, which has a good description of HDFS writes. The write should complete near-immediately, depending on how busy the cluster is:
https://data-flair.training/blogs/hdfs-data-write-operation/
What happens if DataNode fails while writing a file in the HDFS?
If a DataNode fails while data is being written to it, the following actions take place, transparently to the client writing the data:
- The pipeline is closed, and the packets in the ack queue are added back to the front of the data queue, so that DataNodes downstream of the failed node do not miss any packets.
- The failed DataNode is removed from the pipeline, and the remainder of the block's data is written to the remaining healthy DataNodes.
- The NameNode notices that the block is under-replicated and arranges for a further replica to be created on another node.
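The requeueing step above can be sketched in a few lines. This is a simplified illustration, not the real HDFS client code: the queue contents and the `on_datanode_failure` function are hypothetical, standing in for the client's internal data queue (packets not yet sent) and ack queue (packets sent but not yet acknowledged).

```python
from collections import deque

# Hypothetical sketch of the client's two per-block queues.
data_queue = deque(["pkt3", "pkt4"])   # packets not yet sent down the pipeline
ack_queue = deque(["pkt1", "pkt2"])    # packets sent but not yet acknowledged

def on_datanode_failure(data_queue, ack_queue):
    """On pipeline failure, move all unacknowledged packets back to the
    front of the data queue, preserving order, so they are resent to the
    surviving DataNodes and no packet is lost."""
    while ack_queue:
        # Pop the newest unacked packet first so order is preserved
        # after repeated appendleft calls.
        data_queue.appendleft(ack_queue.pop())

on_datanode_failure(data_queue, ack_queue)
print(list(data_queue))  # ['pkt1', 'pkt2', 'pkt3', 'pkt4']
```

Because no packet leaves the ack queue until its acknowledgment arrives, every packet the failed node may have dropped is guaranteed to be retransmitted.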