Filtering extracted results #23

@Jack000

This is more of a question than an issue: I noticed that my scrape contains a large number of spurious results like:

Sorry, we just need to make sure you're not a robot. For best results, please make sure your browser is accepting cookies.

The actual openwebtext corpus seems pretty clean, so I'm wondering what heuristics, if any, were used to remove these pages in order to reproduce the openwebtext corpus.
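For context, the naive filter I've been trying is just a phrase blacklist plus a length cutoff. The phrase list and threshold here are my own guesses, not whatever openwebtext actually used:

```python
# Naive heuristic filter for scraper boilerplate pages.
# SPURIOUS_PHRASES and min_length are assumptions on my part,
# not the openwebtext filtering rules.

SPURIOUS_PHRASES = [
    "make sure you're not a robot",
    "browser is accepting cookies",
    "please enable javascript",
    "access denied",
]

def is_spurious(text: str, min_length: int = 128) -> bool:
    """Return True if the page looks like CAPTCHA/error boilerplate."""
    if len(text) < min_length:
        # Very short pages are almost never real articles.
        return True
    lowered = text.lower()
    return any(phrase in lowered for phrase in SPURIOUS_PHRASES)
```

This catches the robot-check pages above, but it obviously won't match the quality of a trained fasttext classifier.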

The corpus page mentions post-filtering using fasttext - is this something that will be added to this project at some point?

Finally, the readme implies that bs4 would be a better extractor than newspaper. Is that the case? It's not an option in extract_text.py, so it's difficult to compare.
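In case it helps frame the question, this is roughly the bs4 baseline I had in mind comparing against. The tag choices are my own assumption, not something from extract_text.py or the readme:

```python
# Minimal bs4-based text extractor, as a point of comparison
# with newspaper. Strips script/style, then keeps <p> text.
from bs4 import BeautifulSoup

def extract_text_bs4(html: str) -> str:
    soup = BeautifulSoup(html, "html.parser")
    # Drop non-content tags whose text get_text() would otherwise keep.
    for tag in soup(["script", "style", "noscript"]):
        tag.decompose()
    paragraphs = [p.get_text(" ", strip=True) for p in soup.find_all("p")]
    return "\n".join(p for p in paragraphs if p)
```

A paragraph-only extractor like this misses text in divs and lists, which is partly why I'd like it as a selectable option rather than reimplementing it myself.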
