Upstream repository: https://github.com/google-deepmind/alphafold3
I thought it would be released in December, but it came out in early November.
Here I try building it on Rocky Linux 9.
As with AlphaFold2, the official way seems to be to run it in a docker container.
First, let's follow the installation document below:
https://github.com/google-deepmind/alphafold3/blob/main/docs/installation.md
Toolkit for alphafold3 input and output files https://github.com/cddlab/alphafold3_tools
The target machine looks like this:
[root@rockylinux9 ~]# cat /etc/redhat-release
Rocky Linux release 9.5 (Blue Onyx)
[root@rockylinux9 ~]# getenforce
Enforcing
[root@rockylinux9 ~]# cat /proc/driver/nvidia/version
NVRM version: NVIDIA UNIX x86_64 Kernel Module 570.153.02 Tue May 13 16:34:43 UTC 2025
GCC version: gcc version 11.5.0 20240719 (Red Hat 11.5.0-2) (GCC)
[root@rockylinux9 ~]# nvidia-smi -L
GPU 0: NVIDIA GeForce GTX 1070 (UUID: GPU-a49de51b-de1e-52f3-1e3f-ce704e159713)
[root@rockylinux9 ~]# ls -l /usr/local/cuda
ls: cannot access '/usr/local/cuda': No such file or directory
[root@rockylinux9 ~]# dnf -y install dnf-plugins-core
[root@rockylinux9 ~]# dnf config-manager --add-repo https://download.docker.com/linux/rhel/docker-ce.repo
Adding repo from: https://download.docker.com/linux/rhel/docker-ce.repo
[root@rockylinux9 ~]# ls /etc/yum.repos.d/
docker-ce.repo rocky-addons.repo rocky-devel.repo rocky-extras.repo rocky.repo <-- "docker-ce.repo" has been added
[root@rockylinux9 ~]#
[root@rockylinux9 ~]# sed -i 's/enabled=1/enabled=0/' /etc/yum.repos.d/docker-ce.repo
The repository to use is "docker-ce-stable". See "dnf repolist -v docker-ce-stable" for details; its packages can be listed with "dnf list available --disablerepo='*' --enablerepo=docker-ce-stable".
[root@rockylinux9 ~]# dnf --enablerepo=docker-ce-stable install docker-ce
docker-ce, docker-ce-cli, docker-buildx-plugin, docker-ce-rootless-extras, docker-compose-plugin, and containerd.io are installed.
Then start it:
[root@rockylinux9 ~]# systemctl enable docker --now
Looking at "systemctl status docker", the daemon runs as "/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock", so images are stored under "/var/lib/docker".
To store them somewhere else, add "--data-root", e.g. "/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock --data-root /docker-images".
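Instead of editing the systemd unit, the same relocation can be done via Docker's daemon config file. A minimal sketch, assuming you want images under "/docker-images" (the example path above); the file is generated locally here and then placed at /etc/docker/daemon.json as root:

```shell
# Write the data-root setting to a local daemon.json
cat > daemon.json <<'EOF'
{
  "data-root": "/docker-images"
}
EOF
# Then, as root:
#   cp daemon.json /etc/docker/daemon.json
#   systemctl restart docker
# Verify the new location:
#   docker info --format '{{.DockerRootDir}}'
```

This survives package updates better than patching the service file, since dockerd merges daemon.json with its command-line flags at startup.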
Next, install the NVIDIA Container Toolkit:
[root@rockylinux9 ~]# curl -s -L https://nvidia.github.io/libnvidia-container/stable/rpm/nvidia-container-toolkit.repo | tee /etc/yum.repos.d/nvidia-container-toolkit.repo
[root@rockylinux9 ~]# sed -i 's/enabled=1/enabled=0/' /etc/yum.repos.d/nvidia-container-toolkit.repo
See "dnf repolist -v nvidia-container-toolkit" for details on the added "nvidia-container-toolkit" repository; its packages can be listed with "dnf list available --disablerepo='*' --enablerepo=nvidia-container-toolkit".
Then install:
[root@rockylinux9 ~]# dnf --enablerepo=nvidia-container-toolkit install nvidia-container-toolkit
nvidia-container-toolkit, libnvidia-container-tools, libnvidia-container1, and nvidia-container-toolkit-base are installed.
Restart docker to apply:
[root@rockylinux9 ~]# systemctl restart docker
Then test:
[root@rockylinux9 ~]# nvidia-container-cli info
NVRM version: 570.153.02
CUDA version: 12.8
Device Index: 0
Device Minor: 0
Model: NVIDIA GeForce GTX 1070
Brand: GeForce
GPU UUID: GPU-a49de51b-de1e-52f3-1e3f-ce704e159713
Bus Location: 00000000:06:10.0
Architecture: 6.1
[root@rockylinux9 ~]#
[root@rockylinux9 ~]# docker run --gpus all --rm nvidia/cuda:11.8.0-base-ubuntu22.04 nvidia-smi -L
:
GPU 0: NVIDIA GeForce GTX 1070 (UUID: GPU-a49de51b-de1e-52f3-1e3f-ce704e159713)
[root@rockylinux9 ~]#
This completes the docker setup.
As is, regular users cannot use docker. To allow it, either add the account to the "docker" group,
or have the user run "dockerd-rootless-setuptool.sh --skip-iptables install" to enable rootless docker.
[saber@rockylinux9 ~]$ dockerd-rootless-setuptool.sh --skip-iptables install
:
[INFO] Installed docker.service successfully.
[INFO] To control docker.service, run: `systemctl --user (start|stop|restart) docker.service`
[INFO] To run docker.service on system startup, run: `sudo loginctl enable-linger saber`
:
[INFO] Some applications may require the following environment variable too:
export DOCKER_HOST=unix:///run/user/1000/docker.sock
:
[saber@rockylinux9 ~]$ ps -ef |grep docker
(At this point the user's own docker daemon is running. It stops on logout and starts again automatically on the next login.)
After that, as root, change "#no-cgroups = false" to "no-cgroups = true" in "/etc/nvidia-container-runtime/config.toml". With this, a regular user can use the GPU with rootless docker.
To run the user's docker at system boot regardless of login, enabling lingering with "sudo loginctl enable-linger saber" seems to do the trick.
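To avoid exporting DOCKER_HOST by hand on every login, a small sketch that appends it to the user's shell profile (the socket path matches the installer output above; "~/.bashrc" is an assumption, adjust for your shell):

```shell
# Append DOCKER_HOST for the rootless socket to ~/.bashrc, once only.
# $(id -u) is kept unexpanded in the written line, so bash evaluates it
# at login time and the line works for any UID.
PROFILE="$HOME/.bashrc"
LINE='export DOCKER_HOST=unix:///run/user/$(id -u)/docker.sock'
grep -qF "$LINE" "$PROFILE" 2>/dev/null || echo "$LINE" >> "$PROFILE"
grep -F "DOCKER_HOST" "$PROFILE"
```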
Now for the main topic.
Since we went to the trouble of setting up rootless docker, let's build alphafold3 as a regular user.
[saber@rockylinux9 ~]$ git clone https://github.com/google-deepmind/alphafold3.git
[saber@rockylinux9 ~]$ cd alphafold3/
[saber@rockylinux9 alphafold3]$ git log -1
commit 64723739f52944274485118cab935d53d66b5aec (HEAD -> main, origin/main, origin/HEAD)
Author: Augustin Zidek <augustinzidek@google.com>
Date: Fri May 30 09:23:19 2025 -0700
Update to Ubuntu 24.04 / CUDA 12.6.3 base image and use Python 3.12
PiperOrigin-RevId: 765219906
Change-Id: I271f442012ee30356ef316eda87abd231319a673
[saber@rockylinux9 alphafold3]$ ls -CF
CMakeLists.txt legal/ requirements.txt WEIGHTS_PROHIBITED_USE_POLICY.md
dev-requirements.txt LICENSE run_alphafold_data_test.py WEIGHTS_TERMS_OF_USE.md
docker/ OUTPUT_TERMS_OF_USE.md run_alphafold.py
docs/ pyproject.toml run_alphafold_test.py
fetch_databases.sh README.md src/
[saber@rockylinux9 alphafold3]$
[saber@rockylinux9 alphafold3]$ docker build -t alphafold3 -f docker/Dockerfile .
:
:
[saber@rockylinux9 alphafold3]$ docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
alphafold3 latest 15d0b075bc01 About an hour ago 6.81GB
nvidia/cuda 11.8.0-runtime-ubuntu22.04 d8fb74ecc8b2 18 months ago 2.65GB
[saber@rockylinux9 alphafold3]$
The docker image was built successfully.
"fetch_databases.sh" is provided. Reading it, it simply downloads and decompresses the files under "storage.googleapis.com/alphafold-databases/v3.0".
[saber@rockylinux9 alphafold3]$ bash ./fetch_databases.sh /Public/alphafold3
:
(tea break)
:
Complete
[saber@rockylinux9 alphafold3]$
[saber@rockylinux9 alphafold3]$ ls -lh /Public/alphafold3
total 394G
-rw-r--r--. 1 1001 2001 17G Jun 1 16:06 bfd-first_non_consensus_sequences.fasta
-rw-r--r--. 1 1001 2001 120G Jun 1 19:45 mgy_clusters_2022_05.fa
drwxr-x---. 2 1001 2001 4.3M Oct 11 2024 mmcif_files
-rw-r--r--. 1 1001 2001 76G Jun 1 18:14 nt_rna_2023_02_23_clust_seq_id_90_cov_80_rep_seq.fasta
-rw-r--r--. 1 1001 2001 223M Jun 1 14:44 pdb_seqres_2022_09_28.fasta
-rw-r--r--. 1 1001 2001 218M Jun 1 14:45 rfam_14_9_clust_seq_id_90_cov_80_rep_seq.fasta
-rw-r--r--. 1 1001 2001 13G Jun 1 15:31 rnacentral_active_seq_id_90_cov_80_linclust.fasta
-rw-r--r--. 1 1001 2001 102G Jun 1 19:27 uniprot_all_2021_04.fa
-rw-r--r--. 1 1001 2001 67G Jun 1 18:31 uniref90_2022_05.fa
[saber@rockylinux9 alphafold3]$
The model weights are obtained by filling in the request form below:
https://forms.gle/svvpY4u2jsHEwWYS6
Approval is at Google DeepMind's discretion; a request may be declined.
I'll place the weights under "$HOME/af3-models".
Not actually run here:
mkdir $HOME/af_input $HOME/af_output
docker run -it \
--volume $HOME/af_input:/root/af_input \
--volume $HOME/af_output:/root/af_output \
--volume $HOME/af3-models:/root/models \
--volume /Public/alphafold3:/root/public_databases \
--gpus all \
alphafold3 \
python run_alphafold.py \
--json_path=/root/af_input/fold_input.json \
--model_dir=/root/models \
--output_dir=/root/af_output
Next, let's convert the docker image we built into a Singularity image file so it can run without docker.
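The run above expects "af_input/fold_input.json". A minimal sketch of that input, based on the format described in the repository's docs/input.md (the job name and the short protein sequence here are placeholders, not a real target):

```shell
# Create a minimal single-chain input under $HOME/af_input
mkdir -p "$HOME/af_input"
cat > "$HOME/af_input/fold_input.json" <<'EOF'
{
  "name": "test_monomer",
  "modelSeeds": [1],
  "sequences": [
    {
      "protein": {
        "id": "A",
        "sequence": "MVLSPADKTNVKAAW"
      }
    }
  ],
  "dialect": "alphafold3",
  "version": 1
}
EOF
```

Multiple chains are expressed as additional entries in "sequences" (or a list of chain IDs in "id" for homomers).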
Install singularity:
[root@rockylinux9 ~]# dnf install epel-release -y
[root@rockylinux9 ~]# dnf install singularity-ce -y
[saber@rockylinux9 alphafold3]$ docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
[saber@rockylinux9 alphafold3]$ docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
alphafold3 latest dea05120089d 7 minutes ago 6.78GB
[saber@rockylinux9 alphafold3]$ docker run -d -p 5000:5000 --restart=always --name registry registry:2
[saber@rockylinux9 alphafold3]$ docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
alphafold3 latest dea05120089d 8 minutes ago 6.78GB
registry 2 26b2eb03618e 20 months ago 25.4MB
[saber@rockylinux9 alphafold3]$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
9d6e05c348fb registry:2 "/entrypoint.sh /etc…" 34 seconds ago Up 32 seconds 0.0.0.0:5000->5000/tcp, [::]:5000->5000/tcp registry
[saber@rockylinux9 alphafold3]$ docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
9d6e05c348fb registry:2 "/entrypoint.sh /etc…" 47 seconds ago Up 45 seconds 0.0.0.0:5000->5000/tcp, [::]:5000->5000/tcp registry
[saber@rockylinux9 alphafold3]$
[saber@rockylinux9 alphafold3]$ docker tag alphafold3 localhost:5000/alphafold3
[saber@rockylinux9 alphafold3]$ docker ps -a    ("docker ps" shows the same)
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
9d6e05c348fb registry:2 "/entrypoint.sh /etc…" 3 minutes ago Up 3 minutes 0.0.0.0:5000->5000/tcp, [::]:5000->5000/tcp registry
[saber@rockylinux9 alphafold3]$ docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
alphafold3 latest dea05120089d 11 minutes ago 6.78GB
localhost:5000/alphafold3 latest dea05120089d 11 minutes ago 6.78GB
registry 2 26b2eb03618e 20 months ago 25.4MB
[saber@rockylinux9 alphafold3]$ docker push localhost:5000/alphafold3
[saber@rockylinux9 alphafold3]$ docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
9d6e05c348fb registry:2 "/entrypoint.sh /etc…" 10 minutes ago Up 10 minutes 0.0.0.0:5000->5000/tcp, [::]:5000->5000/tcp registry
[saber@rockylinux9 alphafold3]$ docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
alphafold3 latest dea05120089d 5 hours ago 6.78GB
localhost:5000/alphafold3 latest dea05120089d 5 hours ago 6.78GB
registry 2 26b2eb03618e 20 months ago 25.4MB
[saber@rockylinux9 alphafold3]$
(Conversion)
[saber@rockylinux9 alphafold3]$ SINGULARITY_NOHTTPS=1 singularity build alphafold3.sif docker://localhost:5000/alphafold3:latest
:
INFO: Extracting OCI image...
INFO: Inserting Singularity configuration...
INFO: Creating SIF file...
INFO: Build complete: alphafold3.sif
[saber@rockylinux9 alphafold3]$ ls -lh alphafold3.sif
-rwxr-xr-x. 1 saber saber 2.8G Jun 1 15:17 alphafold3.sif
[saber@rockylinux9 alphafold3]$
[saber@rockylinux9 alphafold3]$ docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
alphafold3 latest dea05120089d 5 hours ago 6.78GB
localhost:5000/alphafold3 latest dea05120089d 5 hours ago 6.78GB
registry 2 26b2eb03618e 20 months ago 25.4MB
[saber@rockylinux9 alphafold3]$ docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
9d6e05c348fb registry:2 "/entrypoint.sh /etc…" 16 minutes ago Up 16 minutes 0.0.0.0:5000->5000/tcp, [::]:5000->5000/tcp registry
[saber@rockylinux9 alphafold3]$
Cleanup
[saber@rockylinux9 alphafold3]$ docker stop registry
[saber@rockylinux9 alphafold3]$ docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
9d6e05c348fb registry:2 "/entrypoint.sh /etc…" 17 minutes ago Exited (2) 5 seconds ago registry
[saber@rockylinux9 alphafold3]$ docker rm registry
[saber@rockylinux9 alphafold3]$ docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
[saber@rockylinux9 alphafold3]$
[saber@rockylinux9 alphafold3]$ docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
alphafold3 latest dea05120089d 5 hours ago 6.78GB
localhost:5000/alphafold3 latest dea05120089d 5 hours ago 6.78GB
registry 2 26b2eb03618e 20 months ago 25.4MB
[saber@rockylinux9 alphafold3]$ docker rmi localhost:5000/alphafold3 26b2eb03618e
[saber@rockylinux9 alphafold3]$ docker rmi dea05120089d
Assuming the sequence/structure databases are in "/Public/alphafold3"
and the obtained model files are in "/Public/af3-model",
then in a Slurm environment: create "af_input" and "af_output" in the current directory,
write the input file "af_input/fold_input.json",
and submit the batch script below:
#!/bin/bash
#SBATCH -J af3-test
#SBATCH -o %j.out
#SBATCH -e %j.err
#SBATCH -p workq
#SBATCH -n 8
#SBATCH --gres=gpu:1
cd $SLURM_SUBMIT_DIR
MODEL_DIR=/Public/af3-model
DB_DIR=/Public/alphafold3
singularity exec \
--nv \
--bind $SLURM_SUBMIT_DIR/af_input:/root/af_input \
--bind $SLURM_SUBMIT_DIR/af_output:/root/af_output \
--bind $MODEL_DIR:/root/models \
--bind $DB_DIR:/root/public_databases \
/apps/alphafold3.sif \
python /app/alphafold/run_alphafold.py \
--json_path=/root/af_input/fold_input.json \
--model_dir=/root/models \
--db_dir=/root/public_databases \
--output_dir=/root/af_output
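Usage would then look like this, assuming the batch script above is saved as "af3.sbatch" (a filename chosen here for illustration):

```shell
# Prepare the directories the batch script bind-mounts, then submit
mkdir -p af_input af_output
ls -d af_input af_output
# With af_input/fold_input.json in place:
#   sbatch af3.sbatch
#   squeue -u "$USER"     # monitor the job
#   ls af_output/         # results appear here when the job finishes
```

The stdout/stderr of the job land in "<jobid>.out" and "<jobid>.err" in the submit directory, per the #SBATCH -o/-e lines.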