China-made Moore Threads AI GPUs used for three billion parameter LLM training — MTT S4000 appears competitive against unspecified Nvidia solutions

May 29, 2024
Moore Threads claims to be making great strides in its AI GPU development, with its latest S4000 AI GPU accelerator said to be dramatically faster than its predecessor. As reported by cnBeta, a training run on the new Kua’e Qianka Intelligent Computing Cluster, built on S4000 GPUs, ranked third fastest in AI testing, outperforming several clusters built on Nvidia AI GPUs.

The benchmark run doubled as a stability test of the Kua’e Qianka Intelligent Computing Cluster. Training took a total of 13.2 days and reportedly completed without a single fault or interruption. The model used to benchmark the new cluster was MT-infini-3B, a three-billion-parameter large language model.
