Ask HN: What is the best base model for CLM fine-tuning?

5 points | by philomath868 | 13 days ago
Hi,

I have a largish (2 GB) corpus of curated, high-quality text in some low-resource language, and I want to build a model that would provide an advanced "auto-complete" service for writers.

I'm thinking of taking a decoder-only model such as Llama, Mistral, or Gemma, slicing off the embedding layers (which are based on languages I don't need), creating new ones (perhaps initialized from a FastText model trained on the corpus) paired with a tokenizer newly created from my corpus, and then training the model on my corpus until convergence.

Additional potential details include: a custom loss function for synonym-aware training (based on a custom high-quality thesaurus), where synonyms of the "correct" word are somewhat rewarded; and POS-tagging the corpus with a language-specific POS tagger, then adding a POS-tagging head to the model as a multi-task learning objective, to push generation toward grammatical text.

To be able to use a good model as the base, I will probably be forced to use PEFT (LoRA). My current setup is whatever is available on Colab Pro+, so I can probably use models in the 7B-12B range? (I've put rough code sketches of each of these pieces at the end of the post.)

My main question is: which base model would be best for this task? (Again, this is for completion of general writing of all kinds, not programming or advanced reasoning.)

Also, will the synonym and POS additions help or hurt?

Anything else I might be missing?

Thanks!
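To make the embedding-swap idea concrete, here is a rough sketch of what I'm picturing, assuming a SentencePiece tokenizer, gensim's FastText, and Hugging Face transformers. The corpus path, the 32k vocab size, the base checkpoint, and the random FastText-to-hidden-size projection are all placeholders, not settled choices:

```python
# Sketch: new tokenizer for the low-resource corpus + FastText-initialized embeddings.
import numpy as np
import torch
import sentencepiece as spm
from gensim.models import FastText
from transformers import AutoModelForCausalLM

# 1. Train a tokenizer on the corpus (corpus.txt is a placeholder path).
spm.SentencePieceTrainer.train(
    input="corpus.txt", model_prefix="lowres", vocab_size=32000, model_type="bpe"
)
sp = spm.SentencePieceProcessor(model_file="lowres.model")

# 2. Train FastText on the same corpus (naive whitespace tokenization for simplicity).
sentences = [line.split() for line in open("corpus.txt", encoding="utf-8")]
ft = FastText(sentences=sentences, vector_size=300, min_count=1, epochs=10)

# 3. Load the base model and build a new embedding matrix, one row per new token.
model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf")
hidden = model.config.hidden_size
vocab = sp.get_piece_size()
proj = np.random.normal(scale=0.02, size=(300, hidden))  # naive FastText -> hidden map
new_embed = np.zeros((vocab, hidden), dtype=np.float32)
for i in range(vocab):
    piece = sp.id_to_piece(i).replace("▁", "")
    vec = ft.wv[piece] if piece else np.zeros(300)  # FastText covers unseen pieces via subwords
    new_embed[i] = vec @ proj

# 4. Resize the vocabulary and copy in the initialized weights.
model.resize_token_embeddings(vocab)
with torch.no_grad():
    model.get_input_embeddings().weight.copy_(torch.from_numpy(new_embed))
    # The LM head would need the same treatment if the model doesn't tie weights.
```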
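For the synonym-aware loss, the simplest version I can think of is soft targets: the gold token keeps most of the probability mass and its single-token synonyms split a small bonus. A rough sketch; the synonym_ids mapping (token id -> ids of single-token synonyms) is something I'd build offline from the thesaurus, the epsilon value is a guess, and the per-row loop is obviously not optimized:

```python
# Sketch: cross-entropy against soft targets that reward thesaurus synonyms of the gold token.
import torch
import torch.nn.functional as F

def synonym_aware_loss(logits, labels, synonym_ids, epsilon=0.1, ignore_index=-100):
    # logits: (batch, seq, vocab); labels: (batch, seq)
    vocab = logits.size(-1)
    logits = logits[:, :-1].reshape(-1, vocab)   # predict token t+1 from position t
    labels = labels[:, 1:].reshape(-1)
    mask = labels != ignore_index
    logits, labels = logits[mask], labels[mask]

    # Soft targets: 1 - epsilon on the gold token, epsilon spread over its synonyms.
    targets = torch.zeros_like(logits)
    targets.scatter_(1, labels.unsqueeze(1), 1.0)
    for row, gold in enumerate(labels.tolist()):
        syns = synonym_ids.get(gold, [])
        if syns:
            targets[row, gold] = 1.0 - epsilon
            targets[row, syns] = epsilon / len(syns)

    log_probs = F.log_softmax(logits, dim=-1)
    return -(targets * log_probs).sum(dim=-1).mean()
```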
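For the POS multi-task idea, I'm imagining a small classification head on the last hidden states, trained jointly with the LM loss. In the sketch below, num_pos_tags, the 0.2 weight, and the token-aligned pos_labels (produced offline by the language-specific tagger) are assumptions about my own pipeline, not anything standard:

```python
# Sketch: wrap the causal LM and add a POS-tagging head as a second training objective.
import torch.nn as nn
import torch.nn.functional as F

class CausalLMWithPOSHead(nn.Module):
    def __init__(self, base_model, num_pos_tags, pos_weight=0.2):
        super().__init__()
        self.base = base_model  # e.g. a (PEFT-wrapped) LlamaForCausalLM
        self.pos_head = nn.Linear(base_model.config.hidden_size, num_pos_tags)
        self.pos_weight = pos_weight

    def forward(self, input_ids, attention_mask, labels, pos_labels):
        out = self.base(input_ids=input_ids,
                        attention_mask=attention_mask,
                        labels=labels,
                        output_hidden_states=True)
        lm_loss = out.loss
        hidden = out.hidden_states[-1]               # (batch, seq, hidden)
        pos_logits = self.pos_head(hidden)
        pos_loss = F.cross_entropy(pos_logits.view(-1, pos_logits.size(-1)),
                                   pos_labels.view(-1),
                                   ignore_index=-100)
        return lm_loss + self.pos_weight * pos_loss
```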
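And for the PEFT part, roughly this kind of LoRA setup, with modules_to_save so the freshly initialized embeddings and LM head are trained in full while the rest of the model only gets adapters. Module names here are Llama/Mistral-style and the checkpoint is just an example; other bases name these layers differently:

```python
# Sketch: LoRA on attention/MLP projections, full training of the new embeddings and LM head.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-v0.1")

lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
    modules_to_save=["embed_tokens", "lm_head"],  # new vocab must be fully trainable
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, lora_config)
model.print_trainable_parameters()
```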