Ask HN: Feedback on specialized local LLMs/VLMs

1 point | by BlackForest_ai | 3 months ago
We’ve launched causa™, our application that orchestrates and runs LLMs fully offline across Apple devices (VLM support coming soon). We’re collecting feedback on which specialized or fine-tuned models the community would find most valuable for on-device inference. We already support the main general-purpose families (GPT-OSS, Llama, Mistral, etc.) and are now focusing on domain-specific models fine-tuned for targeted tasks.

Examples:

• Mellum series by JetBrains — optimized for software engineering
• gpt-oss-safeguard — tailored for policy reasoning and AI safety

If you know of other high-quality specialized models (preferably with open weights) that could benefit from mobile deployment, we’d love your input.