
Conversation

@yonigozlan
Member

What does this PR do?

Fixes #41816.

Also standardizes the few remaining processors left to standardize.

Comment on lines -77 to -80
attributes = ["feature_extractor", "tokenizer"]
feature_extractor_class = "WhisperFeatureExtractor"
tokenizer_class = "Qwen2TokenizerFast"

Member Author

Cc @eustlb, moved this to the auto files to have one source of truth, and removed attributes as they are now auto-detected.
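(For context, a minimal sketch of what such auto-detection can look like, inferring sub-processor attributes from the __init__ signature. This is an illustration under that assumption rather than the exact mechanism in this PR, and MyAudioProcessor is a made-up class.)

import inspect

# Illustrative only: modality keywords a sub-processor attribute name may embed.
MODALITIES = ("image_processor", "video_processor", "feature_extractor", "tokenizer")

class SketchProcessorMixin:
    @classmethod
    def get_attributes(cls):
        # Infer sub-processor attribute names from the __init__ signature instead of
        # a hard-coded `attributes` class variable.
        params = inspect.signature(cls.__init__).parameters
        return [name for name in params if any(modality in name for modality in MODALITIES)]

class MyAudioProcessor(SketchProcessorMixin):
    # No explicit `attributes`, `feature_extractor_class` or `tokenizer_class` needed.
    def __init__(self, feature_extractor=None, tokenizer=None):
        self.feature_extractor = feature_extractor
        self.tokenizer = tokenizer

print(MyAudioProcessor.get_attributes())  # ['feature_extractor', 'tokenizer']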



class LasrProcessor(ProcessorMixin):
    tokenizer_class = "ParakeetTokenizerFast"
Member Author

Cc @eustlb Same here, although is this supposed to default to the Parakeet or the Lasr tokenizer?

@HuggingFaceDocBuilderDev

The docs for this PR live here. All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.

Member

@zucchini-nlp zucchini-nlp left a comment


Native support for several processors of the same type is a great feature to have! I am not sure, though, whether the current auto-behavior will work well; it is likely to raise issues with power users who don't follow the "patterns" from this PR.

We could handle it gracefully when no config is found or the names don't match, instead of raising an error, WDYT?
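A minimal sketch of that graceful-fallback idea, assuming a helper that looks up a sub-processor's entry in processor_config.json; the function name and surrounding behavior are illustrative, not the PR's actual API:

import logging

logger = logging.getLogger(__name__)

def resolve_sub_processor_config(processor_dict: dict, sub_processor_type: str):
    # Sketch of the suggested behavior: instead of raising when processor_config.json
    # has no entry for a sub-processor, warn and return None so the caller can fall
    # back to default loading or skip the sub-processor entirely.
    if sub_processor_type in processor_dict:
        return processor_dict[sub_processor_type]
    logger.warning(
        "No config found for '%s' in processor_config.json (available keys: %s); "
        "falling back to default loading for this sub-processor.",
        sub_processor_type,
        list(processor_dict.keys()),
    )
    return None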

Comment on lines +134 to +150
def _get_modality_for_attribute(attribute_name: str) -> str:
    """
    Get the canonical modality type for a given attribute name.
    For example:
    - "image_processor" -> "image_processor"
    - "encoder_image_processor" -> "image_processor"
    - "text_tokenizer" -> "tokenizer"
    - "my_feature_extractor" -> "feature_extractor"
    """
    for modality in MODALITY_TO_AUTOPROCESSOR_MAPPING.keys():
        if modality in attribute_name:
            return modality
    raise ValueError(
        f"Cannot determine modality for attribute '{attribute_name}'. "
        f"Attribute name must contain one of: {list(MODALITY_TO_AUTOPROCESSOR_MAPPING.keys())}"
    )
Member

imo this is oversimplified. Users do not always name attributes following this pattern, and they might also want to use their own processing classes. There is a lot of inheritance and patching in custom code afaik, which can't be reduced to _get_modality_for_attribute imo

Member Author

@yonigozlan yonigozlan Dec 8, 2025

_get_modality_for_attribute is called on the "attributes" names obtained with cls.get_attributes(), which are already filtered to the "attributes" corresponding to sub-processors. So I think this should be fine, unless we want users to be able to define sub-processors whose variable names don't contain the sub-processor type, but I don't know why we would want that, or how we would detect what kind of sub-processor they are in that case.

I guess attributes is really a misnomer here, and we should maybe rename it to subprocessors, but it was named like this before the refactor, so I didn't want to change it and break BC. It might be worth changing though, as it's causing a lot of confusion.
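To illustrate the earlier point about cls.get_attributes(), a self-contained sketch of how those attribute names resolve; the mapping below only mirrors the shape of MODALITY_TO_AUTOPROCESSOR_MAPPING and is not the real one:

# Stand-in mapping: the real MODALITY_TO_AUTOPROCESSOR_MAPPING pairs each modality
# with its Auto* processing class.
MODALITY_TO_AUTOPROCESSOR_MAPPING = {
    "image_processor": "AutoImageProcessor",
    "video_processor": "AutoVideoProcessor",
    "feature_extractor": "AutoFeatureExtractor",
    "tokenizer": "AutoTokenizer",
}

def _get_modality_for_attribute(attribute_name: str) -> str:
    # Same containment rule as in the PR: the attribute name must embed a modality.
    for modality in MODALITY_TO_AUTOPROCESSOR_MAPPING:
        if modality in attribute_name:
            return modality
    raise ValueError(f"Cannot determine modality for attribute '{attribute_name}'.")

# Names returned by cls.get_attributes() follow a "<prefix>_<modality>" pattern:
assert _get_modality_for_attribute("encoder_image_processor") == "image_processor"
assert _get_modality_for_attribute("text_tokenizer") == "tokenizer"
# A sub-processor attribute embedding no known modality would raise instead.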

Member

+1, the naming is indeed causing confusion. Probably I am thinking about cases that are too niche, as I haven't personally seen many power users using a custom processor. I just realized that we were always strict with processor args/kwargs at initialization time 😄

Comment on lines +1548 to +1552
else:
    raise ValueError(
        f"Cannot find config for {sub_processor_type} in processor_config.json. "
        f"Available keys: {list(processor_dict.keys())}"
    )
Member

also not sure it's a good idea to raise an error if the attribute has no config dict. One possible use-case is when a processor has optional attributes that are not available on purpose (see #40447)

Member Author

Not sure I understood the use case in the linked issue :( do you have an example? This code path would only be used by a processor_dict that corresponds to a sub-processor.

Member

Same as above, nvm. It's not breaking BC, and I was thinking about a new feature which we don't have yet.
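(For readers, a generic illustration of the optional-attribute case mentioned above; the class below is made up and not necessarily the exact scenario in #40447.)

# A hypothetical processor whose video_processor is optional: some checkpoints ship
# without one, so processor_config.json has no entry for it. Strict loading that
# raises on the missing key would reject such checkpoints instead of leaving the
# optional sub-processor unset.
class MyMultimodalProcessor:
    def __init__(self, image_processor=None, tokenizer=None, video_processor=None):
        self.image_processor = image_processor
        self.tokenizer = tokenizer
        self.video_processor = video_processor  # may legitimately stay None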

Member

@zucchini-nlp zucchini-nlp left a comment

Okay, since functionality-wise we are doing the same as before and I don't see possible edge cases, approving the PR! Great job, multiple sub-processors will be super helpful for multimodal models.

@yonigozlan yonigozlan force-pushed the support-multi-sub-proc branch from 51fc687 to fde1c81 on December 9, 2025 16:46
@github-actions
Contributor

github-actions bot commented Dec 9, 2025

[For maintainers] Suggested jobs to run (before merge)

run-slow: audioflamingo3, auto, fuyu, lasr, phi4_multimodal, pix2struct, pixtral

@github-actions
Contributor

github-actions bot commented Dec 9, 2025

View the CircleCI Test Summary for this PR:

https://huggingface.co/spaces/transformers-community/circle-ci-viz?pr=42667&sha=ed400c


Successfully merging this pull request may close these issues: Processor saving does not work when multiple tokenizers
