Ask HN: How do you check what ChatGPT says about your product?
I’ve been testing how AI models respond to buyer-style questions about SaaS products across ChatGPT, Claude, Perplexity, and Gemini.

A few things surprised me:

- A product with 31K GitHub stars that ChatGPT basically never surfaced for its core use case.
- A well-funded company with zero presence on generic buying queries in its category.
- Products being described using a competitor’s feature set, because the model seemed to know the category but not the product distinctly.

The biggest thing I keep noticing is that most founders never actually check what these models say about them in buyer conversations. So I’m curious how other people here are approaching this:

- Do you test it manually?
- Do you have a repeatable prompt set (something like the sketch below)?
- Have you seen anything actually move after publishing new content?
- And have you found any reliable way to tell why a model is getting your product wrong, instead of just noticing that it is?
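For what it's worth, here is a minimal sketch of the kind of repeatable prompt set I mean. It assumes the OpenAI Python SDK (1.x) with OPENAI_API_KEY set in the environment, "gpt-4o-mini" as a stand-in model name, and a hypothetical product and category; the other providers would get the same prompt list through their own clients.

```python
# Minimal sketch of a repeatable buyer-style prompt set.
# Assumptions: OpenAI Python SDK >= 1.x, OPENAI_API_KEY in the environment,
# "gpt-4o-mini" as a stand-in model; product and category names are hypothetical.
import json
import time

from openai import OpenAI

client = OpenAI()

PRODUCT = "ExampleProduct"       # hypothetical product name
CATEGORY = "uptime monitoring"   # hypothetical category

# A fixed, versioned list of buyer-style prompts so runs stay comparable over time.
BUYER_PROMPTS = [
    f"What are the best {CATEGORY} tools for a small SaaS team?",
    f"Compare the top {CATEGORY} products and their main features.",
    f"What is {PRODUCT} and what is it typically used for?",
    f"What are the main alternatives to {PRODUCT}, and how do they differ?",
]


def run_prompt_set(model: str = "gpt-4o-mini") -> list[dict]:
    """Run every buyer-style prompt against one model and collect the answers."""
    results = []
    for prompt in BUYER_PROMPTS:
        resp = client.chat.completions.create(
            model=model,
            messages=[{"role": "user", "content": prompt}],
        )
        answer = resp.choices[0].message.content
        results.append({
            "timestamp": time.time(),
            "model": model,
            "prompt": prompt,
            "answer": answer,
            # Crude signal: was the product mentioned at all?
            "mentions_product": PRODUCT.lower() in answer.lower(),
        })
    return results


if __name__ == "__main__":
    snapshot = run_prompt_set()
    # Append each run to a JSONL log so answers can be diffed after publishing new content.
    with open("buyer_prompt_snapshots.jsonl", "a") as f:
        for row in snapshot:
            f.write(json.dumps(row) + "\n")
```

The point is less the specific API calls and more keeping the prompt list fixed and appending every run to a log, so you can actually diff what the models say before and after you ship new content.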