Ask HN: Is anyone using a Mac Studio for local AI/LLMs?
Curious to hear about your experience running local LLMs on a well-specced M3 Ultra or M4 Pro Mac Studio. I don't see much discussion of the Mac Studio for local LLMs, but it seems like you could fit big models in memory thanks to the shared VRAM (unified memory). I assume token generation would be slow, but you might get higher-quality results because you can load larger models into memory.
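For anyone weighing this, a rough back-of-envelope check for whether a model's weights fit in unified memory (a minimal sketch, assuming a dense model where weights dominate; KV cache and runtime overhead add more on top):

```python
def model_memory_gb(params_billion: float, bits_per_weight: float) -> float:
    """Approximate weight memory in decimal GB for a dense model.

    Ignores KV cache, activations, and runtime overhead, which
    add more, especially at long context lengths.
    """
    bytes_total = params_billion * 1e9 * bits_per_weight / 8
    return bytes_total / 1e9

# A 70B model at 4-bit quantization needs roughly 35 GB of weights,
# comfortably within a high-memory Mac Studio configuration.
print(model_memory_gb(70, 4))  # → 35.0
```

This is why the Studio is interesting here: a quantized model far too big for a typical discrete GPU's VRAM can still load entirely into unified memory, trading generation speed for model size.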