I have access to a cluster running PBS Pro and would like to keep a PBSCluster instance running on the headnode. My current (obviously broken) script is:
    import dask_jobqueue

    from paths import get_temp_dir


    def main():
        temp_dir = get_temp_dir()
        # Tell the scheduler to dump its contact info so clients can connect.
        scheduler_options = {'scheduler_file': temp_dir / 'scheduler.json'}
        cluster = dask_jobqueue.PBSCluster(
            cores=24,
            memory='100GB',
            processes=1,
            scheduler_options=scheduler_options,
        )


    if __name__ == '__main__':
        main()
This script is broken because main() returns as soon as the cluster is created, so the PBSCluster object is garbage-collected and the cluster is torn down.
I imagine I must call some sort of execute_io_loop
function, but I can't find anything in the API.
So, how can I keep my PBSCluster alive?
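In case it clarifies what I'm after: the only workaround I can think of is the generic "block main() until a shutdown signal" pattern below. This is my own sketch, not anything from the dask_jobqueue API; serve_forever and the stop event are names I made up, and in my case the cleanup callback would be cluster.close. Is there an idiomatic equivalent built into dask_jobqueue?

    import signal
    import threading


    def serve_forever(cleanup, stop=None):
        """Block the calling thread until `stop` is set, then run cleanup().

        SIGINT and SIGTERM also set the event, so Ctrl-C or `qdel`-style
        termination triggers a clean shutdown instead of an abrupt exit.
        """
        if stop is None:
            stop = threading.Event()
        for sig in (signal.SIGINT, signal.SIGTERM):
            signal.signal(sig, lambda *_: stop.set())
        stop.wait()   # parks here; the cluster object stays referenced and alive
        cleanup()


    # Intended use at the end of main(), keeping `cluster` in scope:
    #     serve_forever(cluster.close)
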
Fight with dragons long enough and you become a dragon yourself; gaze too long into the abyss and the abyss gazes back into you…