# Command Safety Checker

A web application that uses AI (Google Gemini) to analyze shell commands and determine whether they are safe to execute.

## Features

- Input any shell command for analysis
- AI-powered explanation of what the command does
- Safety assessment (Safe, Potentially Dangerous, Extremely Dangerous)
- Detailed risk analysis
- Recommendations for safe execution
- Choice between a fast (lite) model for quick results and a normal model for higher accuracy
- Clean output display after analysis
- Uses the Gemini 2.5 Flash models
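The three-level safety assessment could be modeled with a small TypeScript type. The names below are illustrative, not this project's actual types:

```typescript
// Hypothetical shape for an analysis result (illustrative names,
// not the project's actual types).
type SafetyLevel = "Safe" | "Potentially Dangerous" | "Extremely Dangerous";

interface AnalysisResult {
  command: string;           // the shell command that was analyzed
  explanation: string;       // AI-generated description of what it does
  safety: SafetyLevel;       // one of the three assessment levels
  risks: string[];           // detailed risk analysis
  recommendations: string[]; // suggestions for safe execution
}

// Type guard to validate a safety label returned by the model.
function isSafetyLevel(value: string): value is SafetyLevel {
  return ["Safe", "Potentially Dangerous", "Extremely Dangerous"].includes(value);
}
```

A guard like this is useful because model output is free text: checking the label before rendering keeps malformed responses from silently showing up in the UI.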
## Prerequisites

- Node.js (v16 or higher)
- A Google Gemini API key (get one from Google AI Studio)
## Installation

1. Clone or download this repository:

   ```sh
   git clone <repository-url>
   cd command-safety-checker
   ```

2. Install dependencies:

   ```sh
   npm install
   ```

3. Set up your environment variables:

   - Copy `.env.example` to `.env`:

     ```sh
     cp .env.example .env
     ```

   - Edit `.env` and replace `your_api_key_here` with your actual Gemini API key

4. Start the development server:

   ```sh
   npm run dev
   ```

5. Open your browser and navigate to `http://localhost:5173`
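The environment file from step 3 might look like the following. The variable name is an assumption (check `.env.example` for the name this project actually uses); Vite only exposes variables prefixed with `VITE_` to client-side code:

```
# Hypothetical .env contents — the actual variable name is defined in .env.example
VITE_GEMINI_API_KEY=your_api_key_here
```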
## Usage

1. Select the model type (Fast & Less Accurate or Normal & More Accurate)
2. Enter the command you want to analyze in the large text box
3. Click "Analyze Command"
4. Review the AI's explanation and safety assessment
5. Follow the recommendations before executing the command
6. Click "New Analysis" to run another check
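The fast/normal choice in step 1 could map onto Gemini model IDs along these lines. `gemini-2.5-flash` and `gemini-2.5-flash-lite` are Google's published model names; the function itself is a sketch, not this app's code:

```typescript
// Hypothetical mapping from the UI's model choice to a Gemini model ID.
// The model IDs are Google's published names; the function is illustrative.
type ModelChoice = "fast" | "normal";

function modelIdFor(choice: ModelChoice): string {
  return choice === "fast"
    ? "gemini-2.5-flash-lite" // quicker responses, lower accuracy
    : "gemini-2.5-flash";     // slower, more thorough analysis
}
```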
## Security Notes

- Your API key is loaded from environment variables through Vite; note that any variable exposed to client-side code is embedded in the built bundle, so do not commit your `.env` file or deploy your key publicly
- Always exercise caution before executing unfamiliar commands, even after AI analysis
- The AI analysis is not foolproof and should be used as a supplementary safety measure
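As a sketch of the first point: Vite exposes `VITE_`-prefixed environment variables to client code via `import.meta.env`. The helper below is hypothetical (the variable name `VITE_GEMINI_API_KEY` is an assumption), but it shows validating the key before handing it to the SDK:

```typescript
// Hypothetical helper: read the Gemini API key from an env object such as
// import.meta.env (Vite). The name VITE_GEMINI_API_KEY is an assumption,
// not necessarily what this project uses.
function getApiKey(env: Record<string, string | undefined>): string {
  const key = env.VITE_GEMINI_API_KEY;
  if (!key || key === "your_api_key_here") {
    // Fail fast with a clear message instead of sending a bad key to the API
    throw new Error("Set VITE_GEMINI_API_KEY in your .env file");
  }
  return key;
}

// In the app this would be called as: getApiKey(import.meta.env)
```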
## Tech Stack

- React (v18) with TypeScript
- Google Generative AI SDK
- Bootstrap for UI styling
- Vite as the build tool
- Gemini 2.5 Flash models for fast, efficient analysis