This is more of a question than an issue: I noticed that my scrape contains a large number of spurious results like:

> Sorry, we just need to make sure you're not a robot. For best results, please make sure your browser is accepting cookies.
The actual openwebtext corpus seems pretty clean, so I'm wondering what heuristics, if any, were used to remove these pages, in order to reproduce the openwebtext corpus.
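For what it's worth, a simple phrase-blocklist post-filter catches pages like the one above. This is only a sketch of one possible heuristic; the phrase list and the minimum-length threshold are my assumptions, not whatever the openwebtext authors actually did:

```python
# Hypothetical post-filter: drop scraped documents that match common
# anti-bot / CAPTCHA boilerplate, or that are suspiciously short.
# The phrase list and threshold are assumptions for illustration.
SPURIOUS_PHRASES = [
    "make sure you're not a robot",
    "please enable cookies",
    "access denied",
]

def looks_spurious(text: str, min_words: int = 50) -> bool:
    lowered = text.lower()
    if any(phrase in lowered for phrase in SPURIOUS_PHRASES):
        return True
    # Very short extractions are usually error or interstitial pages.
    return len(text.split()) < min_words

def filter_corpus(docs: list[str]) -> list[str]:
    return [doc for doc in docs if not looks_spurious(doc)]
```

Something like this could run after extraction, before deduplication, but I'd be curious whether the original pipeline used anything similar.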
The corpus page mentions post-filtering with fastText - is this something that will be added to this project at some point?
Finally, the readme implies that bs4 would be a better extractor than newspaper - is that the case? It's not an option in extract_text.py, so it's difficult to compare.
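To illustrate what I mean, a bs4-based extractor could look roughly like this. The function name is hypothetical and this is just a minimal sketch of the approach, not what extract_text.py would actually ship:

```python
from bs4 import BeautifulSoup

def extract_with_bs4(html: str) -> str:
    """Extract visible text from raw HTML, dropping script/style blocks.

    Hypothetical sketch of a bs4 extractor option for extract_text.py;
    not part of the project's current code.
    """
    soup = BeautifulSoup(html, "html.parser")
    for tag in soup(["script", "style"]):
        tag.decompose()
    # Collapse whitespace so output is comparable to newspaper's.
    return " ".join(soup.get_text(separator=" ").split())
```

If something like this were selectable alongside newspaper, it would be much easier to compare the two extractors on the same scrape.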