This is a proposal to refactor `openrisk`'s structure, which currently depends directly on the OpenAI GPT-4o model. The idea is to explore integrating the LLM Provider library from the `aix` project, so that `openrisk` could support different LLM providers such as Ollama in addition to OpenAI.
Proposed Approach:
One possible way to approach this might be:
1. Refactor the `aix` project into a reusable library (e.g., move the `Provider` logic from `internal` to `pkg` and apply the factory pattern).
2. Add the refactored `aix` as a dependency in `openrisk`'s `go.mod`.
3. Add `--provider` and `--model` flags to the `openrisk` CLI.
4. Modify the core logic of `openrisk` to create an LLM Provider instance using `aix`'s `provider.New()` factory function and request the risk score via the injected `Provider` interface.
Dependency:
This work depends on the completion (merge) of the `Provider` library refactoring in the `aix` project. (Related PR: aix#382)
Potential Benefits:
- **Flexibility:** Users could select the LLM provider that best suits their needs (e.g., OpenAI, Ollama, etc.).
- **Security & Privacy:** Local providers would ensure sensitive scan results never leave the user's environment.
- **Cost-Effectiveness:** Users could run `openrisk` without API costs by using locally-hosted models.
- **Extensibility:** When new LLM providers (e.g., Gemini, Claude) are added to the `aix` library in the future, `openrisk` would support them immediately without any code changes.
- **Code Reusability:** This would avoid duplicating LLM integration logic across multiple tools in the `projectdiscovery` ecosystem, allowing it to be managed centrally through `aix`.
We would love to get feedback on this proposal. Does this approach make sense?