RL and distributed training • eXperiments lab

If tile-based primitives generalize across NVIDIA, Apple, and AMD hardware, then a unified, high-performance programming model spanning multiple accelerators is plausible. That would weaken the "CUDA moat" and open up the AI compute landscape. https://t.co/cT1W7yUwDx
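
For a concrete sense of what "tile-based primitives" look like, here is a minimal sketch in Triton, an existing tile-based kernel language whose single source already compiles for both NVIDIA and AMD GPU backends (an Apple backend is the hypothetical the tweet gestures at). The kernel below is a standard Triton vector add; the function names and block size are illustrative choices, not anything from the tweet:

```python
# Minimal tile-based kernel in Triton. The same source targets NVIDIA (PTX)
# and AMD (ROCm) today; an Apple/Metal backend is hypothetical, per the tweet.
import torch
import triton
import triton.language as tl

@triton.jit
def add_kernel(x_ptr, y_ptr, out_ptr, n_elements, BLOCK_SIZE: tl.constexpr):
    # Each program instance owns one tile of BLOCK_SIZE contiguous elements.
    pid = tl.program_id(axis=0)
    offsets = pid * BLOCK_SIZE + tl.arange(0, BLOCK_SIZE)
    mask = offsets < n_elements          # guard the ragged final tile
    x = tl.load(x_ptr + offsets, mask=mask)
    y = tl.load(y_ptr + offsets, mask=mask)
    tl.store(out_ptr + offsets, x + y, mask=mask)

def add(x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
    out = torch.empty_like(x)
    n = out.numel()
    grid = (triton.cdiv(n, 1024),)       # one program per tile
    add_kernel[grid](x, y, out, n, BLOCK_SIZE=1024)
    return out
```

Note what the kernel never mentions: warps, wavefronts, shared memory, or any vendor intrinsic. The programmer describes tiles and masks; the compiler maps them onto each target's hardware, which is exactly the property that would have to generalize for a unified cross-vendor model to hold.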

