230k GPUs, including 30k GB200s, are operational for training Grok in a single supercluster called Colossus 1 (inference is done by our cloud providers). At Colossus 2, the first batch of 550k GB200s & GB300s, also for training, starts going online in a few weeks. As Jensen