Guide ML LLM
GitHub Actions edited this page Jan 25, 2026
Learn how to use AI/ML features in VelinScript.

Note: For detailed information on model training, see the ML Training Tutorial.
```
let loader: ModelLoader = ModelLoader.new();
loader.loadModel("sentiment", "sentiment", "models/sentiment.onnx");
loader.loadModel("classifier", "classification", "models/classifier.onnx");
```

```
@AI(model: "sentiment")
@POST("/api/analyze/sentiment")
fn analyzeSentiment(text: string): SentimentResult {
    let loader: ModelLoader = ModelLoader.new();
    let prediction = loader.predict("sentiment", text);
    return SentimentResult {
        text: text,
        sentiment: prediction,
    };
}
```
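The second model registered above ("classifier") can be exposed the same way. The following is a sketch that mirrors the sentiment endpoint; the route, the `ClassificationResult` type, and its fields are illustrative assumptions, not part of a confirmed API:

```
@AI(model: "classifier")
@POST("/api/analyze/classify")
fn classify(text: string): ClassificationResult {
    let loader: ModelLoader = ModelLoader.new();
    // "classifier" refers to the name registered via loadModel above
    let label = loader.predict("classifier", text);
    return ClassificationResult {
        text: text,
        label: label,
    };
}
```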
The LLM integration is fully implemented and supports real API calls.
```
@POST("/api/chat")
fn chat(message: string): string {
    // Supported providers: "openai", "anthropic", "gemini", "local"
    // The "local" mode simulates responses for testing without API costs
    let llm: LLMClient = LLMClient.new("openai", "api-key");
    // Asynchronous call via the HTTP client
    let result = await llm.generate(message);
    return result;
}
```
```
@POST("/api/embed")
fn embed(text: string): List<number> {
    let llm: LLMClient = LLMClient.new("openai", "api-key");
    // Generates real vector embeddings (e.g. 1536 dimensions for OpenAI)
    return llm.embed(text);
}
```
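Embeddings are usually compared by cosine similarity. A sketch of the computation in VelinScript; the `length()` method, index access, `for` range syntax, and a `sqrt` helper are assumptions about the standard library:

```
fn cosineSimilarity(a: List<number>, b: List<number>): number {
    let mut dot = 0.0;
    let mut normA = 0.0;
    let mut normB = 0.0;
    for i in 0..a.length() {
        dot = dot + a[i] * b[i];
        normA = normA + a[i] * a[i];
        normB = normB + b[i] * b[i];
    }
    // Result is in [-1, 1]; values closer to 1 mean more similar texts
    return dot / (sqrt(normA) * sqrt(normB));
}
```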
```
@POST("/api/documents")
fn createDocument(text: string): Document {
    let llm = LLMClient::new(LLMProvider::OpenAI, getApiKey());
    let embedding = llm.embed(text);
    let db = VectorDB::new(VectorDBProvider::Pinecone, getConnectionString());
    let doc = Document {
        id: generateId(),
        text: text,
        embedding: embedding,
    };
    db.upsert("documents", doc.id, doc.embedding);
    return db.save(doc);
}
```
```
@POST("/api/documents/search")
fn searchDocuments(query: string): List<Document> {
    let llm = LLMClient::new(LLMProvider::OpenAI, getApiKey());
    let query_embedding = llm.embed(query);
    let db = VectorDB::new(VectorDBProvider::Pinecone, getConnectionString());
    let results = db.search("documents", query_embedding, 10);
    return results.map(|r| db.find(Document, r.id));
}
```
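Search and generation compose into a simple retrieval-augmented (RAG) endpoint. A sketch that reuses `searchDocuments` from above; string concatenation with `+` and the `for ... in` loop are assumptions about the language:

```
@POST("/api/ask")
fn ask(question: string): string {
    // Retrieve the most relevant documents for the question
    let docs = searchDocuments(question);
    let mut context = "";
    for doc in docs {
        context = context + doc.text + "\n";
    }
    // Let the LLM answer grounded in the retrieved context
    let llm = LLMClient::new(LLMProvider::OpenAI, getApiKey());
    return await llm.generate("Context:\n" + context + "\nQuestion: " + question);
}
```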
```
@POST("/api/train")
fn trainModel(modelName: string): void {
    let mut training = TrainingService::new();
    // Add training data
    training.add_example("input1", "output1");
    training.add_example("input2", "output2");
    // Start training
    training.train(modelName);
}
```
```
let config = ONNXTrainingConfig {
    epochs: 100,
    batch_size: 32,
    learning_rate: 0.001,
    optimizer: "Adam",
    loss_function: "CrossEntropy"
};
let result = training.train_with_onnx("my_model", config);
```
```
let config = TensorFlowTrainingConfig {
    epochs: 100,
    batch_size: 32,
    learning_rate: 0.001,
    optimizer: "Adam",
    loss_function: "SparseCategoricalCrossentropy",
    validation_split: 0.2
};
let result = training.train_with_tensorflow("tf_model", config);
```
```
let testData = [
    TrainingExample { input: "test1", output: "expected1" }
];
let evalResult = training.evaluate_model("my_model", testData);
// Available metrics: evalResult.accuracy, evalResult.precision,
// evalResult.recall, evalResult.f1_score
```
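The evaluation metrics can gate deployment. A sketch using the `evalResult` fields listed above; `deployModel` and `log` are hypothetical helpers, and the thresholds are illustrative:

```
// Only promote the model if it clears a minimum quality bar
if evalResult.accuracy >= 0.9 && evalResult.f1_score >= 0.85 {
    deployModel("my_model");
} else {
    log("Model below threshold, keeping the previous version");
}
```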
See also: the ML Training Tutorial for detailed information.
- Model caching for performance
- Error handling for API calls
- Rate limiting for LLM APIs
- Cost management for external APIs
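As an illustration of the first two points, here is a sketch of an LLM call with an in-memory cache and basic error handling; the `Cache` type and `try`/`catch` syntax are assumptions, not confirmed VelinScript APIs:

```
let cache: Cache<string, string> = Cache.new();

fn cachedGenerate(llm: LLMClient, prompt: string): string {
    // Model caching: identical prompts are served from memory,
    // which also reduces API costs
    if cache.has(prompt) {
        return cache.get(prompt);
    }
    // Error handling: fail gracefully when the API call errors out
    try {
        let result = await llm.generate(prompt);
        cache.set(prompt, result);
        return result;
    } catch (e) {
        return "Service temporarily unavailable";
    }
}
```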
- API Documentation - complete API reference
- Language Specification