cryosparcm cluster example slurm

The command set in "qstat_code_cmd_tpl" in cluster_info.json is slightly off and fails with an error.
The pipe (|) seems to be what breaks it, so I changed it as follows:

"qstat_code_cmd_tpl": "squeue -j {{ cluster_job_id } } --format=%T | sed -n 2p",
 ↓
"qstat_code_cmd_tpl": "squeue --noheader -j {{ cluster_job_id } } --format=%T",

The full cluster_info.json:

{
    "name" : "node01",
    "worker_bin_path" : "/home/cryosparc/cryosparc_worker/bin/cryosparcw",
    "cache_path" : "/scratch/cryosparc",
    "send_cmd_tpl" : "{{ command }}",
    "qsub_cmd_tpl" : "sbatch {{ script_path_abs }}",
    "qstat_cmd_tpl" : "squeue -j {{ cluster_job_id }}",
    "qstat_code_cmd_tpl": "squeue --noheader -j {{ cluster_job_id }} --format=%T",
    "qdel_cmd_tpl" : "scancel {{ cluster_job_id }}",
    "qinfo_cmd_tpl" : "sinfo",
    "transfer_cmd_tpl" : "cp {{ src_path }} {{ dest_path }}"
}
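After editing, the lane can be registered (or updated) with cryosparcm cluster connect, run from the directory containing the two files; the directory path below is only an example:

$ cd /home/cryosparc/cluster_config   # example directory holding cluster_info.json and cluster_script.sh
$ cryosparcm cluster connect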

cluster_script.sh is basically left at the default:

#!/usr/bin/env bash
#### cryoSPARC cluster submission script template for SLURM
## Available variables:
## {{ run_cmd }}            - the complete command string to run the job
## {{ num_cpu }}            - the number of CPUs needed
## {{ num_gpu }}            - the number of GPUs needed.
##                            Note: the code will use this many GPUs starting from dev id 0
##                                  the cluster scheduler or this script have the responsibility
##                                  of setting CUDA_VISIBLE_DEVICES so that the job code ends up
##                                  using the correct cluster-allocated GPUs.
## {{ ram_gb }}             - the amount of RAM needed in GB
## {{ job_dir_abs }}        - absolute path to the job directory
## {{ project_dir_abs }}    - absolute path to the project dir
## {{ job_log_path_abs }}   - absolute path to the log file for the job
## {{ worker_bin_path }}    - absolute path to the cryosparc worker command
## {{ run_args }}           - arguments to be passed to cryosparcw run
## {{ project_uid }}        - uid of the project
## {{ job_uid }}            - uid of the job
## {{ job_creator }}        - name of the user that created the job (may contain spaces)
## {{ cryosparc_username }} - cryosparc username of the user that created the job (usually an email)
##
## What follows is a simple SLURM script:
 
#SBATCH --job-name cryosparc_{{ project_uid }}_{{ job_uid }}
#SBATCH -n {{ num_cpu }}
#SBATCH --gres=gpu:{{ num_gpu }}
#SBATCH --partition=gpu
#SBATCH --mem={{ (ram_gb*1000)|int }}MB
#SBATCH --output={{ job_log_path_abs }}
#SBATCH --error={{ job_log_path_abs }}
 
available_devs=""
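# Collect the indices of GPUs (0-15) that nvidia-smi reports as having no
# running compute processes; they become CUDA_VISIBLE_DEVICES for this job.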
for devidx in $(seq 0 15);
do
    if [[ -z $(nvidia-smi -i $devidx --query-compute-apps=pid --format=csv,noheader) ]] ; then
        if [[ -z "$available_devs" ]] ; then
            available_devs=$devidx
        else
            available_devs=$available_devs,$devidx
        fi
    fi
done
export CUDA_VISIBLE_DEVICES=$available_devs
 
{{ run_cmd }}
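For reference, with made-up values (4 CPUs, 1 GPU, 24 GB RAM, project P3, job J42, and a placeholder log path), the #SBATCH header above would render to roughly:

#SBATCH --job-name cryosparc_P3_J42
#SBATCH -n 4
#SBATCH --gres=gpu:1
#SBATCH --partition=gpu
#SBATCH --mem=24000MB
#SBATCH --output=/path/to/project/J42/job.log
#SBATCH --error=/path/to/project/J42/job.log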