Ask HN: What would it take for you to take AI seriously?
Recently, OpenAI announced that an AI model/system they developed won a gold medal at the IMO. The IMO is a very difficult exam, and only the best high schoolers in the world even qualify, let alone win gold. Those who do often go on to cutting-edge mathematical research, like Terence Tao, who won the Fields Medal in 2006. It has also been rumored that DeepMind achieved the same result with a yet-to-be-released model.

Now, success in a tough math exam isn't "automating all human labor," but it is certainly a benchmark many thought AI would not achieve easily. Even so, many are claiming it isn't really a big deal, and that humans will still be far smarter than AIs for the foreseeable future.

My question is: if you are in the aforementioned camp, what would it take for you to adopt a frame of mind roughly analogous to "It is realistic that AI systems will become smarter than humans, and could automate all human labor and cognitive output within a single-digit number of years"?

Would it require seeing a humanoid robot perform some difficult task? (The Metaculus definition of AGI requires that a robot be able to satisfactorily assemble a circa-2021 Ferrari 312 T4 1:8-scale automobile model, or the equivalent.) Would it involve a Turing test of sufficient rigor? I'm curious what people's personal definition of "OK, this is really real" is.