Ask HN: Any educated guesses about the actual size/architecture of GPT-5.4 and friends?

1 point | by dsrtslnd23 | 27 days ago | original post
Does anyone have decent intuitions or hard clues about how big models like GPT-5.4, Gemini 3.1, and Opus 4.6 actually are, and how they compare to the best open models such as GLM-5?

Are they all roughly in the same range now (for example, around 1T parameters, maybe MoE), or are the closed models still much bigger?

Also curious about "pro" versions like GPT-5.4 Pro: is that likely a different model, or mostly the same model with more inference-time compute, longer reasoning, or better orchestration?
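To make the "around 1T parameters" guess concrete, here is a back-of-envelope sketch of what such a model would cost just in weight memory. All figures are illustrative assumptions (the 1T total and 50B active-per-token counts are hypothetical, not known specs of any of the models mentioned); the point is only to show the scale gap between a dense model and a sparse MoE at serving time.

```python
# Back-of-envelope: weight-memory footprint of a hypothetical 1T-parameter MoE.
# All parameter counts below are assumptions for illustration, not real specs.

def weight_memory_gb(params: float, bytes_per_param: float) -> float:
    """Memory needed just to store the weights, in GB (1 GB = 1e9 bytes)."""
    return params * bytes_per_param / 1e9

TOTAL_PARAMS = 1e12    # assumed total parameter count (1T)
ACTIVE_PARAMS = 50e9   # assumed active params per token for a sparse MoE

for label, bytes_pp in [("fp16/bf16", 2), ("fp8", 1)]:
    total = weight_memory_gb(TOTAL_PARAMS, bytes_pp)
    active = weight_memory_gb(ACTIVE_PARAMS, bytes_pp)
    print(f"{label}: {total:.0f} GB for all weights, "
          f"~{active:.0f} GB of weights touched per token")
# fp16/bf16: 2000 GB for all weights, ~100 GB of weights touched per token
# fp8: 1000 GB for all weights, ~50 GB of weights touched per token
```

This is why MoE matters for the question: a 1T-parameter sparse model still needs on the order of terabytes of memory to host, but its per-token compute looks more like a mid-size dense model, so total parameter count alone says little about serving cost or capability.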