Add TrafilaturaExtractor class #431

Merged
merged 10 commits on Mar 6, 2025
3 changes: 3 additions & 0 deletions docs/user-guide/api/download.rst
@@ -55,6 +55,9 @@ Common Crawl
.. autoclass:: nemo_curator.download.ResiliparseExtractor
:members:

.. autoclass:: nemo_curator.download.TrafilaturaExtractor
:members:

------------------------------
Wikipedia
------------------------------
11 changes: 7 additions & 4 deletions docs/user-guide/download.rst
@@ -18,7 +18,7 @@ the extraction step to limit the amount of documents that undergo this heavy com
NeMo Curator provides example utilities for downloading and extracting Common Crawl, ArXiv, and Wikipedia data.
In addition, it provides a flexible interface to extend the utility to other datasets.
Our Common Crawl example demonstrates how to process a crawl by downloading the data from S3, doing preliminary language filtering with pyCLD2,
and extracting the relevant text with jusText or Resiliparse to output :code:`.jsonl` files.
and extracting the relevant text with jusText, Resiliparse, or Trafilatura to output :code:`.jsonl` files.

NeMo Curator currently does not provide out-of-the-box support for web-crawling or web-scraping.
It provides utilities for downloading and extracting data from the preexisting online sources given above.
@@ -88,6 +88,7 @@ You can choose to modify the HTML text extraction algorithm used in ``download_c
from nemo_curator import get_client
from nemo_curator.download import (
ResiliparseExtractor,
TrafilaturaExtractor,
download_common_crawl,
)
from nemo_curator.datasets import DocumentDataset
@@ -106,8 +107,10 @@ You can choose to modify the HTML text extraction algorithm used in ``download_c
output_type = "jsonl"
os.makedirs(output_folder, exist_ok=True)
# Change the extraction algorithm to use ResiliparseExtractor
# Change the extraction algorithm to Resiliparse
extraction_algorithm = ResiliparseExtractor()
# Alternatively, change the extraction algorithm to Trafilatura
# extraction_algorithm = TrafilaturaExtractor()
# Download and extract the Common Crawl data using the Resiliparse extraction algorithm.
# The function returns a DocumentDataset that contains the extracted documents.
@@ -128,15 +131,15 @@ You can choose to modify the HTML text extraction algorithm used in ``download_c
if __name__ == "__main__":
main()
Above, we changed the extraction algorithm from the default ``JusTextExtractor``.
Above, we changed the extraction algorithm from the default ``JusTextExtractor``. **Note:** The ``JusTextExtractor``, ``ResiliparseExtractor``, and ``TrafilaturaExtractor`` classes each have their own parameters specific to their extraction algorithms. Please see each class's docstring for more details.
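
For example, a minimal sketch of configuring the Trafilatura algorithm with a few of its constructor parameters (the values shown are simply its documented defaults, not tuned recommendations):

.. code-block:: python

    extraction_algorithm = TrafilaturaExtractor(
        required_stopword_density=0.32,  # proportion of stopwords a kept paragraph must contain
        min_extracted_size=250,          # character count below which Trafilatura's fallbacks are triggered
        max_repetitions=2,               # maximum number of duplicated segments allowed
    )

    # Pass it to download_common_crawl via the algorithm parameter, exactly as shown
    # for Resiliparse above.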

The return value ``common_crawl`` will be in NeMo Curator's standard ``DocumentDataset`` format. Check out the function's docstring for more parameters you can use.

NeMo Curator's Common Crawl extraction process looks like this under the hood:

1. Decode the HTML within the record from binary to text.
2. If the HTML can be properly decoded, then with `pyCLD2 <https://github.com/aboSamoor/pycld2>`_, perform language detection on the input HTML.
3. Finally, extract the relevant text with `jusText <https://github.com/miso-belica/jusText>`_ or `Resiliparse <https://github.com/chatnoir-eu/chatnoir-resiliparse>`_ from the HTML and write it out as a single string within the 'text' field of a JSON entry within a `.jsonl` file.
3. Finally, extract the relevant text with `jusText <https://github.com/miso-belica/jusText>`_, `Resiliparse <https://github.com/chatnoir-eu/chatnoir-resiliparse>`_, or `Trafilatura <https://trafilatura.readthedocs.io/en/latest/>`_ from the HTML and write it out as a single string within the 'text' field of a JSON entry within a `.jsonl` file.
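
A condensed, hedged sketch of those three steps for a single record (the helper shown here is illustrative rather than NeMo Curator's internal API; Trafilatura stands in for whichever extractor you configured):

.. code-block:: python

    from typing import Optional

    import pycld2 as cld2
    from charset_normalizer import detect
    from trafilatura import extract as extract_with_trafilatura

    def extract_record(raw_bytes: bytes) -> Optional[str]:
        # 1. Decode the HTML within the record from binary to text.
        encoding = detect(raw_bytes).get("encoding") or "utf-8"
        html = raw_bytes.decode(encoding, errors="replace")

        # 2. Perform language detection on the decoded HTML with pyCLD2.
        is_reliable, _, details = cld2.detect(html)
        if not is_reliable or details[0][0] == "Unknown":
            return None

        # 3. Extract the relevant text; the result becomes the 'text' field
        #    of a JSON entry in the output .jsonl file.
        return extract_with_trafilatura(html)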
* ``download_wikipedia`` will download and extract the latest Wikipedia dump. Files are downloaded using ``wget``. Wikipedia may download more slowly than the other datasets because Wikipedia limits the number of downloads allowed per IP address.

.. code-block:: python
2 changes: 2 additions & 0 deletions nemo_curator/download/__init__.py
@@ -20,6 +20,7 @@
CommonCrawlWARCIterator,
JusTextExtractor,
ResiliparseExtractor,
TrafilaturaExtractor,
download_common_crawl,
)
from .doc_builder import (
@@ -54,6 +55,7 @@
"CommonCrawlWARCDownloaderExtractOnly",
"JusTextExtractor",
"ResiliparseExtractor",
"TrafilaturaExtractor",
"download_wikipedia",
"WikipediaDownloader",
"WikipediaIterator",
152 changes: 150 additions & 2 deletions nemo_curator/download/commoncrawl.py
@@ -1,4 +1,4 @@
# Copyright (c) 2024, NVIDIA CORPORATION. All rights reserved.
# Copyright (c) 2025, NVIDIA CORPORATION. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
@@ -17,6 +17,7 @@
import subprocess
import unicodedata
from abc import ABC, abstractmethod
from copy import deepcopy
from typing import Literal, Optional
from urllib.parse import urlparse

@@ -25,6 +26,8 @@
import pycld2 as cld2
from charset_normalizer import detect
from resiliparse.extract.html2text import extract_plain_text
from trafilatura import extract as extract_with_trafilatura
from trafilatura.settings import DEFAULT_CONFIG as TRAFILATURA_DEFAULT_CONFIG
from warcio.archiveiterator import ArchiveIterator

from nemo_curator.datasets import DocumentDataset
@@ -92,6 +95,26 @@ def __init__(
"""
Initialize the jusText text extraction algorithm with specified parameters.
jusText is a tool for removing boilerplate content, such as navigation links, headers, and footers from HTML pages.
It is designed to preserve mainly text containing full sentences and it is therefore well suited for creating linguistic resources such as Web corpora.
The key idea is that long blocks can often be classified with high confidence, while shorter blocks require context-based adjustments.
Here is an overview of the jusText algorithm:
• Segmentation: The document is split into textual blocks based on HTML tags that typically define separate sections (e.g., <div>, <p>, <table>).
• Preprocessing: Contents of <header>, <style>, and <script> tags are removed.
Certain elements (e.g., <select>, copyright symbols) are immediately classified as boilerplate.
• Context-Free Classification: Each block is classified as:
- Bad (boilerplate) if it has high link density.
- Short if it is too small to be classified reliably.
- Near-Good if it has a moderate density of stopwords.
- Good (main content) if it is long and contains many stopwords.
• Context-Sensitive Classification: Blocks that were classified as short or near-good are reclassified based on surrounding blocks.
The assumption is that main content clusters together, as does boilerplate.
• Headings Processing: Header elements (e.g., <h1>, <h2>) are treated separately to ensure useful headings are preserved.
Short headers near good content may be reclassified as near-good or good.
Please refer to the jusText documentation for more details: https://corpus.tools/wiki/Justext/Algorithm
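
A minimal usage sketch (assuming this class exposes the same ``extract_text(html, stop_words)`` interface as the other extractors, with ``html`` and ``stop_words`` supplied by the caller)::

    extractor = JusTextExtractor()
    paragraphs = extractor.extract_text(html, stop_words)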
Args:
length_low: Minimum length of text to be considered for extraction.
length_high: Maximum length of text to be considered for extraction.
@@ -165,6 +188,18 @@ def __init__(
"""
Initialize the Resiliparse text extraction algorithm with specified parameters.
The Resiliparse algorithm extracts structural or semantic information from noisy raw web data for further processing,
such as (main) content extraction / boilerplate removal, schema extraction, general web data cleansing, and more.
It is implemented via the `extract_plain_text` function in the `resiliparse.extract.html2text` module.
Resiliparse HTML2Text is a very fast and rule-based plain text extractor for HTML pages which uses the Resiliparse DOM parser.
The `extract_plain_text` function extracts all visible text nodes inside the HTML document's <body>.
Only <script>, <style> and a few other (generally) invisible elements are skipped and very basic ASCII formatting is applied.
Please refer to the Resiliparse documentation for more details: https://resiliparse.chatnoir.eu/en/latest/man/extract/html2text.html
NeMo Curator has added a stopword density filter to the Resiliparse extraction process, which requires that a paragraph contains a certain proportion of stopwords.
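
For example, a sketch of raising that requirement when constructing the extractor (the value shown is illustrative)::

    extractor = ResiliparseExtractor(required_stopword_density=0.35)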
Args:
required_stopword_density: Proportion of stopwords required to preserve an extracted paragraph.
Studies on stopword lists and their distribution in various text corpora often
@@ -200,6 +235,118 @@ def extract_text(self, html, stop_words):
return result


class TrafilaturaExtractor(HTMLExtractorAlgorithm):
def __init__(
self,
required_stopword_density=0.32,
min_extracted_size=250,
min_extracted_comm_size=1,
min_output_size=1,
min_output_comm_size=1,
max_tree_size=None,
min_duplcheck_size=100,
max_repetitions=2,
**extract_kwargs,
):
"""
Initialize the Trafilatura text extraction algorithm with specified parameters.
The Trafilatura extraction process combines readability-lxml and jusText as fallbacks to ensure robustness.
Trafilatura's own algorithm follows a cascade of rule-based filters and content heuristics:
• Content Delimitation: Uses XPath expressions to exclude unwanted HTML elements (e.g., navigation bars) and focus on relevant content (e.g., article body).
Extracted HTML nodes are analyzed for relevance based on element type, text length, and link density.
• Fallback Mechanism: If extraction seems faulty, alternative algorithms are run as backups.
These use heuristics like line length, text-to-markup ratio, and HTML depth to improve extraction.
Outputs are compared, prioritizing longer extractions with fewer impurities.
• Baseline Extraction: If all else fails, it searches for text elements that might have been missed, discarding irrelevant content.
The system balances precision and recall, extracting main text, comments, and metadata (title, site name, author, date, categories, tags).
Please refer to the Trafilatura documentation for more details:
https://trafilatura.readthedocs.io/en/latest/ and https://aclanthology.org/2021.acl-demo.15/
NeMo Curator has added a stopword density filter to the Trafilatura extraction process, which requires that a paragraph contains a certain proportion of stopwords.
Args:
required_stopword_density: Proportion of stopwords required to preserve an extracted paragraph.
Studies on stopword lists and their distribution in various text corpora often
suggest that around 30-40% of a typical English text consists of stopwords.
min_extracted_size: Acceptable size in characters (used to trigger fallbacks).
Defaults to 250. See Trafilatura documentation: https://trafilatura.readthedocs.io/en/latest/settings.html.
min_extracted_comm_size: Works the same as min_extracted_size for comment extraction.
Defaults to 1. See Trafilatura documentation: https://trafilatura.readthedocs.io/en/latest/settings.html.
min_output_size: Absolute acceptable minimum for main text output.
Defaults to 1. See Trafilatura documentation: https://trafilatura.readthedocs.io/en/latest/settings.html.
min_output_comm_size: Works the same as min_output_size for comment extraction.
Defaults to 1. See Trafilatura documentation: https://trafilatura.readthedocs.io/en/latest/settings.html.
max_tree_size: Used to discard documents with too many elements. Defaults to None.
min_duplcheck_size: Minimum size in characters to run deduplication on.
Defaults to 100. See Trafilatura documentation: https://trafilatura.readthedocs.io/en/latest/settings.html.
max_repetitions: Maximum number of duplicates allowed.
Defaults to 2. See Trafilatura documentation: https://trafilatura.readthedocs.io/en/latest/settings.html.
extract_kwargs: Additional keyword arguments for the Trafilatura extract function.
See the API documentation https://trafilatura.readthedocs.io/en/latest/corefunctions.html#extract
for a list of possible parameters.
All arguments are set to their default values, except for deduplicate (bool), which is set to True.
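
A short usage sketch (``include_comments`` is assumed to be a valid ``trafilatura.extract`` keyword argument and is forwarded through ``extract_kwargs``; ``html`` and ``stop_words`` are supplied by the caller)::

    extractor = TrafilaturaExtractor(min_extracted_size=300, include_comments=False)
    paragraphs = extractor.extract_text(html, stop_words)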
"""
self.required_stopword_density = required_stopword_density
self.min_extracted_size = min_extracted_size
self.min_extracted_comm_size = min_extracted_comm_size
self.min_output_size = min_output_size
self.min_output_comm_size = min_output_comm_size
self.max_tree_size = max_tree_size
self.min_duplcheck_size = min_duplcheck_size
self.max_repetitions = max_repetitions
self.extract_kwargs = extract_kwargs

def extract_text(self, html, stop_words):
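# Build a per-call copy of Trafilatura's default configuration so the library-wide defaults are never mutated.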
trafilatura_config = deepcopy(TRAFILATURA_DEFAULT_CONFIG)
trafilatura_config["DEFAULT"]["MIN_EXTRACTED_SIZE"] = str(
self.min_extracted_size
)
trafilatura_config["DEFAULT"]["MIN_EXTRACTED_COMM_SIZE"] = str(
self.min_extracted_comm_size
)
trafilatura_config["DEFAULT"]["MIN_OUTPUT_SIZE"] = str(self.min_output_size)
trafilatura_config["DEFAULT"]["MIN_OUTPUT_COMM_SIZE"] = str(
self.min_output_comm_size
)
if self.max_tree_size:
trafilatura_config["DEFAULT"]["MAX_TREE_SIZE"] = str(self.max_tree_size)
trafilatura_config["DEFAULT"]["MIN_DUPLCHECK_SIZE"] = str(
self.min_duplcheck_size
)
trafilatura_config["DEFAULT"]["MAX_REPETITIONS"] = str(self.max_repetitions)

# Recommended to set deduplicate=True
self.extract_kwargs.setdefault("deduplicate", True)
Collaborator commented: Can you mention in the docstring that this happens?

Collaborator (author) replied: Updated, thanks!

text = extract_with_trafilatura(
html, config=trafilatura_config, **self.extract_kwargs
)

if text is not None:
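# Keep only paragraphs whose stopword density meets the required threshold (the filter NeMo Curator adds on top of Trafilatura).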
paragraphs = list(filter(None, text.split("\n")))
result = []
for paragraph in paragraphs:
words = paragraph.split()
length = len(words)
if length == 0:
continue
stopwords = [word for word in words if word in stop_words]
stopword_density = len(stopwords) / length

if stopword_density >= self.required_stopword_density:
result.append(paragraph)
else:
return None

if len(result) == 0:
return None
return result


def get_stop_list_dict(languages=[]):

# Name mapping for language names from CLD2 (values)
@@ -387,7 +534,8 @@ def download_common_crawl(
end_snapshot (str): Identifier for the latest snapshot to process, which must be chronologically after start_snapshot.
output_type (Literal["jsonl", "parquet"]): The file format for the extracted output. Must be either "jsonl" or "parquet".
• This is not used for the output file, but is used to check if an extracted output already exists.
algorithm: The text extraction algorithm instance (e.g., JusTextExtractor or ResiliparseExtractor) to use for HTML processing.
algorithm: The text extraction algorithm instance to use for HTML processing.
• This can be a JusTextExtractor (default), ResiliparseExtractor, or TrafilaturaExtractor object.
news (bool): When True, indicates that URLs should be retrieved from the CC-NEWS dataset.
• This also means snapshot identifiers should follow the 'YYYY-MM' format.
aws (bool): If True, downloads are sourced from Common Crawl's S3 bucket using s5cmd;
1 change: 1 addition & 0 deletions pyproject.toml
@@ -66,6 +66,7 @@ dependencies = [
"resiliparse",
"sentencepiece",
"spacy>=3.6.0, <3.8.0",
"trafilatura",
"transformers>=4.48.0",
"unidic-lite==1.0.8",
"usaddress==0.5.10",