feat: AI-powered contract audit analysis for verified contracts #244

@AugustoL

Description

Summary

Add an AI-powered contract audit analysis feature that reviews verified smart contract source code and reports potential vulnerabilities, code quality issues, and best-practice violations. The feature is primarily targeted at contracts deployed locally via the Hardhat plugin, where source code and ABIs are readily available, but should also work for any verified contract.

Motivation

Developers using OpenScan with a local Hardhat node during development have immediate access to their contract source code and want fast, actionable security feedback without leaving the explorer. Running an AI audit inline — while the contract is deployed and being tested — dramatically shortens the feedback loop compared to uploading code to external audit tools or waiting for a formal review.

For verified contracts on public networks, the same feature provides a useful first-pass security overview for users interacting with unfamiliar contracts.

Proposed Solution

User Flow

  1. On any contract address page where source code is available (verified or locally imported via Ignition/Hardhat artifacts), show an "AI Audit" button in the contract tab.
  2. When triggered, the source code is sent to a configurable AI provider (user supplies their own API key via Settings → API Keys, following the existing API key pattern).
  3. The audit result is displayed inline on the contract page, structured into sections:
    • Critical / High / Medium / Low / Informational findings
    • Each finding includes: severity badge, affected function/line reference, description, and recommendation.
  4. Results are cached per contract address + source hash to avoid redundant API calls.
  5. Users can re-run the audit to get a fresh analysis.
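The caching in step 4 could key on chain ID, address, and a hash of the source, so any source change invalidates the cached audit. A minimal sketch (function and key format are hypothetical, not existing OpenScan code):

```typescript
import { createHash } from "node:crypto";

// Hypothetical cache key: chain ID + lowercased address + SHA-256 of the
// flattened source. Re-auditing the same unchanged source hits the cache;
// any edit to the source produces a new key.
export function auditCacheKey(
  chainId: number,
  address: string,
  source: string
): string {
  const sourceHash = createHash("sha256").update(source).digest("hex");
  return `audit:${chainId}:${address.toLowerCase()}:${sourceHash}`;
}
```

A "Re-run" action would simply bypass the cache lookup and overwrite the stored entry for that key.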

AI Provider Integration

  • Support multiple providers via user-configured API keys (same Settings → API Keys panel already used for other services).
  • Suggested providers (user picks one): Anthropic Claude, OpenAI GPT-4o, Google Gemini.
  • The prompt should be structured to request JSON-formatted output for reliable parsing.
  • Provider and model should be configurable in Settings.
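The JSON-output requirement could look like the sketch below: a finding type matching the severity buckets above, plus a provider-agnostic prompt builder. All names and the exact prompt wording are hypothetical:

```typescript
// Hypothetical shape of one finding, matching the severity buckets in the
// user flow; the prompt asks the model to return a JSON array of these.
export type Severity = "Critical" | "High" | "Medium" | "Low" | "Informational";

export interface AuditFinding {
  severity: Severity;
  location: string; // e.g. affected function and/or line reference
  description: string;
  recommendation: string;
}

// Build a provider-agnostic prompt that requests strictly JSON output,
// so the response can be parsed reliably regardless of provider.
export function buildAuditPrompt(source: string): string {
  return [
    "You are a smart contract security auditor.",
    "Review the Solidity source below and respond with ONLY a JSON array of",
    'findings, each: {"severity","location","description","recommendation"}.',
    'Allowed severities: "Critical","High","Medium","Low","Informational".',
    "Source:",
    "```solidity",
    source,
    "```",
  ].join("\n");
}
```

Each provider adapter would then only differ in how it sends this prompt and extracts the JSON from the response.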

Hardhat / Local Network Priority

  • When connected to localhost (chain ID 31337), the "AI Audit" button is surfaced more prominently (e.g., in the primary contract action bar rather than a secondary tab).
  • Contracts imported via Ignition Deployment artifacts automatically include source code, making them immediately eligible for audit without additional verification steps.
  • The feature should work even for unverified contracts if the ABI + source were imported manually.
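The eligibility and placement rules above can be sketched in a few lines (types and function names are hypothetical; only the 31337 chain ID comes from the existing DataService detection):

```typescript
// Hypothetical contract metadata as seen by the contract page.
export interface ContractInfo {
  verified: boolean;
  importedSource?: string; // from Ignition/Hardhat artifacts or manual import
}

// Eligible whenever source is available, verified or not.
export function canAudit(info: ContractInfo): boolean {
  return info.verified || !!info.importedSource;
}

// Chain ID 31337 is the Hardhat localhost default: surface the button
// in the primary action bar there, secondary tab elsewhere.
export function auditButtonPlacement(chainId: number): "primary" | "secondary" {
  return chainId === 31337 ? "primary" : "secondary";
}
```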

Security and Privacy

  • Source code is sent to the AI provider only on explicit user action (never automatically).
  • A clear disclosure notice is shown before the first audit: "Your contract source code will be sent to [Provider]. Do not audit contracts containing secrets."
  • The disclosure can be permanently acknowledged per provider in Settings.
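The per-provider acknowledgement could be stored under a namespaced key, one flag per provider. In the sketch below, `store` stands in for `localStorage` (any string key/value store) to keep the example self-contained; all names are hypothetical:

```typescript
// Hypothetical storage key, namespaced per provider so acknowledging
// e.g. Anthropic does not acknowledge OpenAI.
export function ackKey(provider: string): string {
  return `openscan.auditDisclosureAck.${provider}`;
}

export function isAcknowledged(
  store: Map<string, string>,
  provider: string
): boolean {
  return store.get(ackKey(provider)) === "true";
}

export function acknowledge(store: Map<string, string>, provider: string): void {
  store.set(ackKey(provider), "true");
}
```

The contract page would check `isAcknowledged` before the first audit and show the disclosure dialog when it returns false.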

Alternatives Considered

  • Linking to external audit tools (MythX, Slither, etc.): These require separate accounts, CLIs, or uploads — friction that breaks the local dev workflow.
  • Running static analysis locally (Slither/Mythril WASM): Possible future enhancement, but complex to bundle and maintain. AI analysis is faster to ship and more accessible.
  • Hardcoded audit prompts without user API key: Would require OpenScan to proxy requests, introducing a backend dependency and cost — incompatible with the trustless, standalone design.

Additional Context

  • The app already has an API Keys section in Settings (used for other data providers); the AI provider key fits naturally there.
  • Localhost / Hardhat support already exists: trace support, Ignition artifact import (Settings → Import Ignition Deployment), and chain ID 31337 detection in DataService.
  • The audit result display should follow the existing card/table layout patterns used on the contract and transaction pages.
  • Large contracts (>100KB source) may need chunking or a warning about token limits.
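The token-limit warning could use a rough size heuristic. The ~4 characters per token ratio below is a common approximation, not any provider's actual tokenizer, and the 80% threshold is an arbitrary safety margin:

```typescript
// Rough estimate: ~4 characters per token is a common heuristic; actual
// tokenization varies by provider and model (assumption, not exact).
const CHARS_PER_TOKEN = 4;

export function estimateTokens(source: string): number {
  return Math.ceil(source.length / CHARS_PER_TOKEN);
}

// Warn when the source alone consumes more than ~80% of the model's
// context window, leaving too little room for the prompt and response.
export function exceedsTokenBudget(
  source: string,
  contextWindow: number
): boolean {
  return estimateTokens(source) > contextWindow * 0.8;
}
```

When `exceedsTokenBudget` returns true, the UI would show the warning (or, as a later enhancement, chunk the source across multiple requests).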

Acceptance Criteria

  • An "AI Audit" button appears on contract pages where source code is available
  • The button is prominently surfaced for localhost (chain ID 31337) contracts
  • Contracts imported via Hardhat/Ignition artifacts are automatically eligible for audit
  • The user can configure their AI provider API key in Settings → API Keys
  • A disclosure notice is shown before the first audit per provider, with a permanent acknowledgement option
  • Audit results are displayed inline, grouped by severity (Critical / High / Medium / Low / Informational)
  • Each finding shows: severity, affected location, description, and recommendation
  • Results are cached per contract address + source hash; a Re-run button forces a fresh audit
  • At least two AI providers are supported (e.g., Anthropic Claude and OpenAI)
  • Large contracts trigger a warning if they may exceed provider token limits
  • All UI strings use the i18n system; translations added for en and es
  • TypeScript type checking passes with no errors
  • Formatting and linting pass (npm run format:fix, npm run lint:fix)

Metadata

Labels

enhancement (New feature or request)
