
Feature/huggingface models #25

Merged
leo-gan merged 3 commits into leo-gan:main from LeoGan:feature/huggingface-models on Oct 3, 2025
Conversation

leo-gan (Owner) commented Oct 3, 2025

No description provided.

google-labs-jules bot and others added 3 commits on October 3, 2025 at 02:41
This commit introduces support for using Hugging Face's hosted Inference API to anonymize documents.

Key changes:
- Added `huggingface-hub` as a dependency.
- Updated `ModelProvider` and `ModelName` enums to include Hugging Face models.
- Implemented the logic to call the Hugging Face API using `InferenceClient`.
- Updated the CLI's `README.md` to include instructions for the `HUGGING_FACE_TOKEN` environment variable, a link to obtain the token, and a list of the new models.
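
A minimal sketch of how the enum changes described above might fit together, reconstructed from the review excerpts later in this thread; the member names and the non-Hugging-Face fallbacks are illustrative, not the repository's actual values:

    from enum import Enum

    class ModelProvider(Enum):
        GEMINI = "gemini"
        OLLAMA = "ollama"
        HUGGINGFACE = "huggingface"

    class ModelName(Enum):
        # Hypothetical entries; the real model list lives in the repo's enums.
        GEMINI_FLASH = "gemini-1.5-flash"
        OLLAMA_LLAMA3 = "llama3"
        HF_MISTRAL = "mistralai/Mistral-7B-Instruct-v0.3"

        @property
        def provider(self) -> ModelProvider:
            # Hugging Face model IDs are namespaced "org/model"; the diff
            # relies on that heuristic (see the review comment on lines
            # +40 to +41 below).
            if "/" in self.value:
                return ModelProvider.HUGGINGFACE
            if self is ModelName.OLLAMA_LLAMA3:
                return ModelProvider.OLLAMA
            return ModelProvider.GEMINI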
@gemini-code-assist

Summary of Changes

Hello @leo-gan, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request significantly enhances the PDF Anonymizer application by integrating Hugging Face models as a new provider for Large Language Models. This allows users to leverage a broader range of LLMs for anonymization tasks, providing greater flexibility and choice. The changes include updates to core logic for model interaction, new configuration requirements, and comprehensive documentation to guide users through the setup and usage of these new models.

Highlights

  • Hugging Face Model Integration: The application now supports Large Language Models from Hugging Face, expanding the available options for anonymization beyond Google Gemini and Ollama.
  • New Configuration for Hugging Face: A new environment variable, HUGGING_FACE_TOKEN, has been introduced and documented in .env.template and README.md files to authenticate with Hugging Face models.
  • Core Logic Updates: The pdf-anonymizer-core package has been updated to include huggingface_hub as a dependency and modify the LLM calling logic to interact with Hugging Face models.
  • Version Bumps and Dependency Management: All relevant project versions have been incremented to 0.3.1, and huggingface-hub and protobuf have been added as core dependencies, reflected in pyproject.toml and uv.lock.
  • Documentation and Examples: README files across the project have been updated to reflect the new Hugging Face model support, including example model names and an updated run.sh script demonstrating its usage.

leo-gan merged commit dc52bb1 into leo-gan:main on Oct 3, 2025
2 checks passed

gemini-code-assist (bot) left a comment


Code Review

This pull request adds support for using Hugging Face models for anonymization. The changes include updating dependencies, adding configuration for Hugging Face models, and implementing the client logic to call the Hugging Face Inference API. The documentation and environment variable templates have also been updated accordingly.

My review focuses on improving the robustness, performance, and maintainability of the new Hugging Face integration. I've suggested adding validation for the Hugging Face token, optimizing client instantiation to avoid re-creation in retry loops, and refactoring the response parsing logic for better readability. I also pointed out a potential fragility in how model providers are determined.

Comment on lines +67 to +73
            elif model_enum.provider == ModelProvider.HUGGINGFACE:
                client = InferenceClient(
                    model=model_name, token=os.getenv("HUGGING_FACE_TOKEN")
                )
                response = client.chat_completion(
                    messages=[{"role": "user", "content": prompt}],
                )


Severity: high

It's good practice to validate the presence of the HUGGING_FACE_TOKEN before attempting to use it. This provides a clearer error message to the user if the token is missing. The current implementation would rely on the InferenceClient to fail, which might result in a less specific error message and unnecessary retries.

            elif model_enum.provider == ModelProvider.HUGGINGFACE:
                token = os.getenv("HUGGING_FACE_TOKEN")
                if not token:
                    raise ValueError(
                        "HUGGING_FACE_TOKEN environment variable not set. Please set it in your .env file."
                    )
                client = InferenceClient(model=model_name, token=token)
                response = client.chat_completion(
                    messages=[{"role": "user", "content": prompt}],
                )

Comment on lines +68 to +70
                client = InferenceClient(
                    model=model_name, token=os.getenv("HUGGING_FACE_TOKEN")
                )


Severity: medium

For performance optimization, consider instantiating the InferenceClient outside the for loop. Creating a new client on every retry attempt is inefficient. This observation also applies to the genai.Client() instantiation in the else block.
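
A minimal sketch of that hoisting, assuming a simple attempt-counting retry loop; the `call_huggingface` name, the loop shape, and the `max_retries` default are illustrative, not the repository's actual code:

    import os
    from huggingface_hub import InferenceClient

    def call_huggingface(model_name: str, prompt: str, max_retries: int = 3) -> str:
        # The client is created once, before the loop; only the request
        # itself is retried.
        client = InferenceClient(model=model_name, token=os.getenv("HUGGING_FACE_TOKEN"))
        for attempt in range(max_retries):
            try:
                response = client.chat_completion(
                    messages=[{"role": "user", "content": prompt}],
                )
                return response.choices[0].message.content or ""
            except Exception:
                # Broad catch for illustration; real code would target the
                # client's transient error types.
                if attempt == max_retries - 1:
                    raise
        return ""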

Comment on lines +125 to +133
    elif model_enum.provider == ModelProvider.HUGGINGFACE:
        if (
            response
            and hasattr(response, "choices")
            and response.choices
            and hasattr(response.choices[0], "message")
            and hasattr(response.choices[0].message, "content")
        ):
            return response.choices[0].message.content or ""


Severity: medium

This series of checks can be simplified using a try...except block, which is more idiomatic in Python (EAFP - Easier to Ask for Forgiveness than Permission). This makes the code cleaner and easier to read. The try...except block will gracefully handle cases where response is None, or any of the nested attributes (choices, message, content) or list indices are missing.

    elif model_enum.provider == ModelProvider.HUGGINGFACE:
        try:
            return response.choices[0].message.content or ""
        except (AttributeError, IndexError, TypeError):
            return ""

Comment on lines +40 to +41
if "/" in self.value:
return ModelProvider.HUGGINGFACE


Severity: medium

Relying on the presence of a '/' in the model name to determine the provider is a bit fragile. While it works for the current Hugging Face models, this heuristic might break in the future if other providers adopt a similar naming scheme or if Hugging Face model names change. A more robust approach would be to explicitly map each model to its provider, for example, using a dictionary. This would make the code more maintainable and less prone to errors.
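
A minimal sketch of the explicit mapping this comment suggests, reusing the illustrative `ModelProvider` enum from the sketch near the top of this thread; the model IDs are hypothetical:

    # Explicit model-to-provider map; unknown models fail loudly instead of
    # being silently misclassified by a substring heuristic.
    _MODEL_TO_PROVIDER = {
        "gemini-1.5-flash": ModelProvider.GEMINI,
        "llama3": ModelProvider.OLLAMA,
        "mistralai/Mistral-7B-Instruct-v0.3": ModelProvider.HUGGINGFACE,
    }

    def provider_for(model_name: str) -> ModelProvider:
        try:
            return _MODEL_TO_PROVIDER[model_name]
        except KeyError:
            raise ValueError(f"Unknown model name: {model_name}") from None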

leo-gan deleted the feature/huggingface-models branch on October 3, 2025 at 03:58