
Is there a more complete BERT-pretrained Chinese language model? #3

@liyushihaoren2

The official Chinese pre-trained BERT model tokenizes at the character level and does not take n-gram information into account. Has anyone built a more complete pre-trained BERT model that incorporates n-grams or other such information?
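
For reference, a minimal sketch of the character-level behavior described above, assuming the Hugging Face transformers library and the `bert-base-chinese` checkpoint (both are assumptions; the original post does not name a specific toolkit):

```python
# Minimal sketch (assumption: Hugging Face transformers is installed).
from transformers import BertTokenizer

# Load the official Chinese BERT tokenizer.
tokenizer = BertTokenizer.from_pretrained("bert-base-chinese")

# Chinese text is split into individual characters, so multi-character
# words lose their n-gram identity at the tokenization stage.
print(tokenizer.tokenize("预训练语言模型"))
# -> ['预', '训', '练', '语', '言', '模', '型']
```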
