feature/OpenRouter provider added #26

Merged
leo-gan merged 2 commits into leo-gan:main from LeoGan:feat/openrouter-provider-refactor
Oct 3, 2025

Conversation


leo-gan (Owner) commented Oct 3, 2025

No description provided.

google-labs-jules bot and others added 2 commits October 3, 2025 18:05
This change introduces support for OpenRouter as a new LLM provider and refactors the model provider architecture to be more extensible.

Key changes include:
- Added an `OpenRouterProvider` to `llm_provider.py`.
- Refactored `call_llm.py` to use a factory pattern for instantiating providers, removing the previous `if/elif/else` structure.
- Modified `pyproject.toml` to use optional dependencies (`extras`) for each provider (`google`, `ollama`, `huggingface`, `openrouter`), allowing users to install only the libraries they need.
- Updated `conf.py` to prefix `ModelName` enum members with their provider (e.g., `google_gemini_2_5_flash`) for more robust provider detection (see the sketch after this list).
- Updated `README.md` files for both the core and CLI packages to reflect the new installation instructions and model naming conventions.
- Fixed all tests to align with the new architecture.
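
The provider-prefixed enum and prefix-based detection described above might look roughly like this. This is a hedged sketch: the member name google_gemini_2_5_flash comes from the commit message, while the second member, the string values, and the detect_provider helper are invented for illustration.

from enum import Enum

class ModelName(str, Enum):
    # Members are prefixed with their provider so the provider can be
    # derived from the member name itself.
    google_gemini_2_5_flash = "gemini-2.5-flash"    # value is illustrative
    openrouter_gpt_4o_mini = "openai/gpt-4o-mini"   # member and value invented for illustration

def detect_provider(model: ModelName) -> str:
    # Everything before the first underscore names the provider.
    return model.name.split("_", 1)[0]

assert detect_provider(ModelName.google_gemini_2_5_flash) == "google"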
@gemini-code-assist

Summary of Changes

Hello @leo-gan, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request significantly enhances the flexibility and extensibility of the anonymization tool by integrating OpenRouter as a new LLM provider. It introduces a more modular architecture for interacting with various LLMs, making it easier to add future providers and manage dependencies. The changes also streamline the installation process and update documentation to reflect the new provider and improved structure.

Highlights

  • New LLM Provider Integration: Added support for OpenRouter as a new Large Language Model (LLM) provider, expanding the range of available models for anonymization.
  • Modular LLM Calling Architecture: Refactored the core LLM calling logic into a new llm_provider module, introducing an abstract base class and concrete implementations for each provider (Google, Ollama, Hugging Face, OpenRouter). This improves modularity and maintainability (see the sketch after this list).
  • Updated Installation and Configuration: Revised installation instructions for both the core library and CLI to leverage pip install with optional extras for specific LLM providers. Environment variable handling and model naming conventions were also updated across documentation and configuration files.
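
The provider abstraction called out above might look roughly like this. A minimal sketch: the LLMProvider base class and the OpenRouterProvider name come from the PR, while the generate method name and its signature are assumptions, not the project's actual interface.

from abc import ABC, abstractmethod

class LLMProvider(ABC):
    """Common interface implemented by each concrete provider."""

    @abstractmethod
    def generate(self, model_name: str, prompt: str) -> str:
        """Send the prompt to the backing model and return its text response."""

class OpenRouterProvider(LLMProvider):
    def generate(self, model_name: str, prompt: str) -> str:
        # A real implementation would call the OpenRouter API here,
        # e.g. via an OpenAI-compatible client.
        raise NotImplementedError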

leo-gan merged commit 2bd7797 into leo-gan:main Oct 3, 2025
2 checks passed
gemini-code-assist bot left a comment

Code Review

This pull request introduces support for the OpenRouter provider and significantly refactors the LLM provider handling. The new architecture, which uses a provider abstraction layer and a factory pattern, is a major improvement that enhances modularity and makes the system more extensible. The dependency management is also cleaner with the use of optional dependencies for each provider. I've identified a couple of areas for improvement, mainly concerning robustness and maintainability in the new provider implementation.

completion = client.chat.completions.create(
    model=model_name, messages=[{"role": "user", "content": prompt}]
)
return completion.choices[0].message.content or ""


Severity: high

There's a potential IndexError here if the completion.choices list returned by the API is empty. While this is unlikely for a successful call, it's safer to guard against it to prevent the application from crashing. This change ensures the code gracefully handles an empty choices list.

Suggested change
- return completion.choices[0].message.content or ""
+ return (completion.choices[0].message.content or "") if completion.choices else ""
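
Since Python's conditional expression binds more loosely than `or`, the bot's original one-liner already parses as intended; the parentheses above just make that explicit. Inside the same method, an equivalent and more readable guard might look like this (a sketch, not part of the suggested diff):

if not completion.choices:
    return ""
return completion.choices[0].message.content or ""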

Comment on lines 58 to 61
logging.error(
    f"Attempt {attempt + 1} failed with JSON decode error: {e}, "
-   f"response: {response_text[:200]}..."
+   f"response: {raw_text[:200] if 'raw_text' in locals() else 'N/A'}..."
)


Severity: medium

The if 'raw_text' in locals() check is redundant. The json.JSONDecodeError exception can only be raised if json.loads() is called, which means raw_text has already been successfully assigned in the try block. You can simplify this logging statement for better readability.

Suggested change
- logging.error(
-     f"Attempt {attempt + 1} failed with JSON decode error: {e}, "
-     f"response: {raw_text[:200] if 'raw_text' in locals() else 'N/A'}..."
- )
+ logging.error(
+     f"Attempt {attempt + 1} failed with JSON decode error: {e}, "
+     f"response: {raw_text[:200]}..."
+ )
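
To see why the check is redundant, consider the shape of the surrounding retry loop: json.JSONDecodeError can only come from json.loads(raw_text), which runs only after raw_text has been assigned. A minimal sketch of that pattern follows; the function name, the fetch_response callable, and the retry shape are assumptions, not the project's actual code.

import json
import logging

def parse_json_with_retries(fetch_response, max_retries: int = 3) -> dict:
    for attempt in range(max_retries):
        try:
            raw_text = fetch_response()   # if this raises, it is not a JSONDecodeError
            return json.loads(raw_text)   # the only call that can raise JSONDecodeError
        except json.JSONDecodeError as e:
            # raw_text is guaranteed to be bound here: json.loads only ran
            # after the assignment above succeeded.
            logging.error(
                f"Attempt {attempt + 1} failed with JSON decode error: {e}, "
                f"response: {raw_text[:200]}..."
            )
    raise ValueError("Failed to parse a JSON response after all retries.")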

Comment on lines +120 to +130
def get_provider(provider_name: str) -> LLMProvider:
    """Factory function to get a provider instance."""
    if provider_name == "google":
        return GoogleProvider()
    if provider_name == "ollama":
        return OllamaProvider()
    if provider_name == "huggingface":
        return HuggingFaceProvider()
    if provider_name == "openrouter":
        return OpenRouterProvider()
    raise ValueError(f"Unknown provider: {provider_name}")


Severity: medium

The get_provider factory function can be made more scalable and arguably more readable by using a dictionary to map provider names to their corresponding classes. This avoids a growing chain of if statements as more providers are added and is a more common pattern for factories.

Suggested change
def get_provider(provider_name: str) -> LLMProvider:
    """Factory function to get a provider instance."""
    provider_map = {
        "google": GoogleProvider,
        "ollama": OllamaProvider,
        "huggingface": HuggingFaceProvider,
        "openrouter": OpenRouterProvider,
    }
    provider_class = provider_map.get(provider_name)
    if not provider_class:
        raise ValueError(f"Unknown provider: {provider_name}")
    return provider_class()
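
A quick usage sketch of the proposed factory, assuming the prefix-based provider detection described in the PR commit message; the split call and the "anthropic" example are illustrative only:

# Provider-qualified model names make dispatch trivial:
model_name = "google_gemini_2_5_flash"                 # example from the PR description
provider = get_provider(model_name.split("_", 1)[0])   # -> GoogleProvider()

# Unknown providers fail fast with a clear error:
try:
    get_provider("anthropic")
except ValueError as e:
    print(e)  # Unknown provider: anthropic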
