License | Open source
Size | 1.51 MB
Language | Python
To install Horovod: $ pip install horovod
To run on GPUs with NCCL: $ HOROVOD_GPU_OPERATIONS=NCCL pip install horovod
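After installing, you can check which frameworks and collective backends were compiled in with Horovod's built-in check:
$ horovodrun --check-build
To use Horovod in a training script, make the following additions to your program (TensorFlow 1.x session API shown):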
import tensorflow as tf
import horovod.tensorflow as hvd
# Initialize Horovod
hvd.init()
# Pin GPU to be used to process local rank (one GPU per process)
config = tf.ConfigProto()
config.gpu_options.visible_device_list = str(hvd.local_rank())
# Build model...
loss = ...
opt = tf.train.AdagradOptimizer(0.01 * hvd.size())
# Add Horovod Distributed Optimizer
opt = hvd.DistributedOptimizer(opt)
# Add hook to broadcast variables from rank 0 to all other processes during
# initialization.
hooks = [hvd.BroadcastGlobalVariablesHook(0)]
# Make training operation
train_op = opt.minimize(loss)
# Save checkpoints only on worker 0 to prevent other workers from corrupting them.
checkpoint_dir = '/tmp/train_logs' if hvd.rank() == 0 else None
# The MonitoredTrainingSession takes care of session initialization,
# restoring from a checkpoint, saving to a checkpoint, and closing when done
# or an error occurs.
with tf.train.MonitoredTrainingSession(checkpoint_dir=checkpoint_dir,
                                       config=config,
                                       hooks=hooks) as mon_sess:
    while not mon_sess.should_stop():
        # Perform synchronous training.
        mon_sess.run(train_op)
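The example above uses the TensorFlow 1.x session API, which is deprecated in TensorFlow 2. Below is a minimal sketch of the same steps (init, GPU pinning, learning-rate scaling, optimizer wrapping, variable broadcast) using the Keras binding horovod.tensorflow.keras; the toy model and random data are placeholders, and exact compile arguments can vary across Horovod and TensorFlow versions:

import numpy as np
import tensorflow as tf
import horovod.tensorflow.keras as hvd

# Initialize Horovod and pin one GPU per local process.
hvd.init()
gpus = tf.config.experimental.list_physical_devices('GPU')
if gpus:
    tf.config.experimental.set_visible_devices(gpus[hvd.local_rank()], 'GPU')

# Placeholder model; any Keras model works here.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation='relu', input_shape=(32,)),
    tf.keras.layers.Dense(10, activation='softmax'),
])

# Scale the learning rate by the number of workers, then wrap the
# optimizer so gradients are averaged across workers on each step.
opt = hvd.DistributedOptimizer(tf.keras.optimizers.Adam(0.001 * hvd.size()))
model.compile(loss='sparse_categorical_crossentropy', optimizer=opt)

# Broadcast initial variables from rank 0 so all workers start in sync.
callbacks = [hvd.callbacks.BroadcastGlobalVariablesCallback(0)]

# Placeholder data for illustration only.
x = np.random.rand(512, 32).astype('float32')
y = np.random.randint(0, 10, size=(512,))
model.fit(x, y, batch_size=32, epochs=1, callbacks=callbacks,
          verbose=1 if hvd.rank() == 0 else 0)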
Run the script with horovodrun:
1. To run on a machine with 4 GPUs: $ horovodrun -np 4 -H localhost:4 python train.py
2. To run on 4 machines with 4 GPUs each: $ horovodrun -np 16 -H server1:4,server2:4,server3:4,server4:4 python train.py
3. To run using Open MPI without the horovodrun wrapper, see Running Horovod with Open MPI; a sketch of the equivalent mpirun invocation follows below.
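For reference, a minimal mpirun equivalent of the single-machine command above, following the flags given in the Horovod documentation (exact flags depend on your Open MPI version and cluster setup):
$ mpirun -np 4 \
    -bind-to none -map-by slot \
    -x NCCL_DEBUG=INFO -x LD_LIBRARY_PATH -x PATH \
    -mca pml ob1 -mca btl ^openib \
    python train.py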