Ask HN: How do I use "Deep Research"?

3 points | by muddi900 | about 1 month ago
The issue I have had with LLMs since the start has been the slop. Contrary to popular belief, the open web was mostly slop before ChatGPT; we just used to call it SEO blogspam. And all LLMs are trained on it.

So when I tried Google's Deep Research in Gemini, I ran into the same problem. It was basically a typical LLM chat response, only a lot more verbose, with citations to the same blogspam listicles that make regular human research difficult.

How do I avoid this pitfall? In other words, how do I use Deep Research?