What nature does not give, artificial intelligence (AI) does not provide.

Author: vayllon · 9 months ago · original post
The title of this article paraphrases the old Latin proverb “Quod natura non dat, Salmantica non praestat”, which means “What nature does not give, Salamanca University does not provide”. In the same spirit, we could say that artificial intelligence can't make up for what natural, biological intelligence lacks. We're talking about innate abilities like memory, comprehension, or the capacity to learn. Put simply: if someone lacks natural talent, not even ChatGPT can save them.

For those who are not familiar with the University of Salamanca, it is one of the oldest universities in Europe, founded in 1218. The proverb is carved in stone on one of its buildings, which has helped cement its popularity.

And that brings us to the real point of this article: AI won't make us smarter if we don't know how to use it. When it comes to large language models (LLMs), this has everything to do with prompt engineering and context: how we craft our questions, provide context and examples to get meaningful answers, and how we decide whether to trust those answers or not.
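To make that concrete, here is a minimal sketch of what “question, context and examples” can look like in practice. It assumes the official openai Python SDK (v1 or later); the model name, the context snippet, the questions and the answers are placeholders invented for illustration, not part of any particular product or workflow.

```python
# Minimal sketch: instructions + grounding context + one worked example + the real
# question, sent as a single chat-completion call. All content below is made up
# for illustration; only the SDK calls themselves are standard openai (v1+) usage.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# 1) Instructions: tell the model who it is and how to behave (the "hypnosis" part).
system_msg = (
    "You are a careful research assistant. Answer only from the provided context. "
    "If the context does not contain the answer, say \"I don't know\"."
)

# 2) Context: the material the answer must stay grounded in (hypothetical snippet).
context = (
    "The University of Salamanca, in Salamanca, Spain, was founded in 1218 "
    "and is one of the oldest universities in Europe."
)

# 3) One worked example (a one-shot prompt) showing the expected answer style.
example_question = "When was the University of Salamanca founded?"
example_answer = "According to the context, it was founded in 1218."

# 4) The actual question.
question = "In which country is the University of Salamanca located?"

response = client.chat.completions.create(
    model="gpt-4.1",  # placeholder; any chat-capable model would do here
    messages=[
        {"role": "system", "content": system_msg},
        {"role": "user", "content": f"Context:\n{context}\n\nQuestion: {example_question}"},
        {"role": "assistant", "content": example_answer},
        {"role": "user", "content": f"Context:\n{context}\n\nQuestion: {question}"},
    ],
)
print(response.choices[0].message.content)
```

The exact API call is beside the point; what matters is the shape of the prompt: instructions that fix the model's behaviour, context it must stay grounded in, a worked example of the expected answer, and only then the real question. Strip those away and the model answers from memory alone, which is exactly where judging the answer falls back on our own prior knowledge.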
Personally, prompt engineering is starting to feel more and more like hypnosis.

When I write complex prompts filled with detailed instructions, I think of those stage magicians who hypnotize people from the audience, telling them how to behave or even who they are (a chicken, or whatever).

With each new version of large language models, this “hypnotic engineering” seems to grow stronger. I wouldn't be surprised if, in the near future, we start seeing professional “suggesters”: specialists in AI hypnosis through carefully crafted prompts. We might even get new job titles like LLM Hypnotist or AI Whisperer. Imagine movies like The LLM Whisperer, a sequel to The Horse Whisperer.

For instance, with GPT-4.1 we're already starting to see some highly suggestive prompts that point in this direction. Just one example:

“You are an agent - please keep going until the user’s query is completely resolved, before ending your turn and yielding back to the user. Only terminate your turn when you are sure that the problem is solved. You MUST plan extensively before each function call, and reflect extensively on the outcomes of the previous function calls. DO NOT do this entire process by making function calls only, …”

Not only do we need the skill of a hypnotist to craft these instructions, but we also need the ability of a psychologist to interpret the responses, keep the conversation going, and even detect hallucinations. In other words, we must be smart enough to use these new tools effectively.

To paraphrase another popular saying: “You must first read, then reflect. Doing it in the reverse order is dangerous.” The idea here is that both reading without reflection and reflecting without a knowledge base can lead to bad results.

The same applies when using tools like ChatGPT: we need to know how to ask the right questions, and, just as importantly, how to think critically about the answers we get. That has a lot to do with how much prior knowledge we have of the domain. If we know nothing about it, we will probably believe whatever the chatbot tells us, and that is when things get really dangerous.

So, in an attempt to hypnotize the audience myself, I would suggest that you cultivate your intelligence, your memory, and your comprehension skills. It's a daily task, like going to the gym. Because if you start delegating your intelligence to ChatGPT and similar tools, you won't have the judgment needed to use them well. It is well known that if you delegate a skill, you lose it; there are plenty of examples all around you. Please don't lose your ability to think. That is very dangerous.