Ask HN: Inducing LLM hallucinations to detect student cheating

4 points | by peerplexity | 9 months ago | original post
I am thinking about adding a question designed to induce an LLM to hallucinate a response. This method could detect students who are cheating. The best question would be one for which students could not plausibly come up with a solution resembling the one the LLM provides. Any hints?