
# Storage Backends

Singh, Prabhu (He/Him/His) edited this page Apr 8, 2026 · 7 revisions

Detailed setup guides for each storage backend.

## Overview

| Backend | Type | Use Case | Managed Service | Validated |
|---|---|---|---|---|
| File | `file` | Development, testing | No | ✅ |
| PostgreSQL | `postgresql` | General purpose | RDS, Cloud SQL, Azure DB | ✅ |
| MongoDB | `mongodb` | Document storage | Atlas, DocumentDB | ✅ |
| MySQL | `mysql` | Relational data | RDS, Cloud SQL, Azure DB | ✅ |
| Elasticsearch | `elasticsearch` | Full-text search | Elastic Cloud, OpenSearch | ✅ |
| InfluxDB | `influxdb` | Time-series data | InfluxDB Cloud | ✅ |
| Azure Log Analytics | `azure` | Azure ecosystem | Yes (Azure) | ✅ |
| AWS CloudWatch | `aws` | AWS ecosystem | Yes (AWS) | Pending |
| GCP Cloud Logging | `gcp` | GCP ecosystem | Yes (GCP) | Pending |

Legend: ✅ = Tested and working on minikube | Pending = Not yet validated


## File Storage

Best for development and testing.

```yaml
storage:
  type: file
  outputDir: /logs
```

Pros:

- Zero configuration
- No external dependencies

Cons:

- Logs lost on pod restart
- No query capabilities
- Single pod only

## PostgreSQL

Reliable relational database storage.

### Schema

Logsenta auto-creates this table:

```sql
CREATE TABLE IF NOT EXISTS error_logs (
    id SERIAL PRIMARY KEY,
    namespace VARCHAR(255) NOT NULL,
    pod_name VARCHAR(255) NOT NULL,
    container VARCHAR(255) NOT NULL,
    error_time TIMESTAMP NOT NULL,
    error_line TEXT,
    log_entries JSONB,
    metadata JSONB,
    created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
);
CREATE INDEX idx_error_logs_namespace ON error_logs(namespace);
CREATE INDEX idx_error_logs_error_time ON error_logs(error_time);
```
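The `log_entries` and `metadata` columns are JSONB, so surrounding log context has to be serialized before insertion. A hypothetical Python sketch of assembling one row's parameters for this schema (the helper name and the `%s`-placeholder query style are illustrative, not Logsenta's internals):

```python
import json
from datetime import datetime, timezone

def build_error_row(namespace, pod_name, container, error_line,
                    log_entries, metadata=None):
    """Build the parameter tuple for one error_logs row.

    log_entries and metadata are serialized to JSON strings so they
    can be bound to the JSONB columns defined in the schema above.
    """
    return (
        namespace,
        pod_name,
        container,
        datetime.now(timezone.utc),   # error_time
        error_line,
        json.dumps(log_entries),      # -> log_entries JSONB
        json.dumps(metadata or {}),   # -> metadata JSONB
    )

INSERT_SQL = """
INSERT INTO error_logs
    (namespace, pod_name, container, error_time, error_line, log_entries, metadata)
VALUES (%s, %s, %s, %s, %s, %s, %s);
"""

row = build_error_row(
    "production", "api-7f9c", "api",
    "java.lang.OutOfMemoryError: Java heap space",
    log_entries=["line before", "the error line", "line after"],
)
```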

### Installation

```bash
# Install PostgreSQL
helm install postgresql bitnami/postgresql -n database --create-namespace \
  --set auth.username=logsenta \
  --set auth.password=$DB_PASSWORD \
  --set auth.database=k8s_logs

# Install Logsenta
helm install logsenta-engine logsenta/logsenta-engine \
  --namespace logsenta --create-namespace \
  --set storage.type=postgresql \
  --set connections.postgresql.host=postgresql.database.svc.cluster.local \
  --set connections.postgresql.username=logsenta \
  --set connections.postgresql.password=$DB_PASSWORD
```

### Query Examples

```sql
-- Recent errors
SELECT * FROM error_logs
ORDER BY error_time DESC
LIMIT 100;

-- Errors by namespace
SELECT namespace, COUNT(*) AS error_count
FROM error_logs
GROUP BY namespace
ORDER BY error_count DESC;

-- Errors in last hour
SELECT * FROM error_logs
WHERE error_time > NOW() - INTERVAL '1 hour';
```

## MongoDB

Document-based storage for flexible schemas.

### Installation

```bash
# Install MongoDB
helm install mongodb bitnami/mongodb -n database --create-namespace \
  --set auth.rootPassword=$ROOT_PASSWORD \
  --set auth.username=logsenta \
  --set auth.password=$DB_PASSWORD \
  --set auth.database=k8s_logs

# Install Logsenta
helm install logsenta-engine logsenta/logsenta-engine \
  --namespace logsenta --create-namespace \
  --set storage.type=mongodb \
  --set connections.mongodb.host=mongodb.database.svc.cluster.local \
  --set connections.mongodb.username=logsenta \
  --set connections.mongodb.password=$DB_PASSWORD
```

### Query Examples

```javascript
// Recent errors
db.error_logs.find().sort({error_time: -1}).limit(100)

// Errors by namespace
db.error_logs.aggregate([
  { $group: { _id: "$namespace", count: { $sum: 1 } } },
  { $sort: { count: -1 } }
])

// Search error text
db.error_logs.find({ error_line: /OutOfMemory/i })
```
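The `$group`/`$sort` pipeline above just counts documents per namespace and orders by that count descending. A small Python sketch of the same aggregation over in-memory documents (the sample data is invented for illustration):

```python
from collections import Counter

def errors_by_namespace(docs):
    """Replicate { $group: { _id: "$namespace", count: { $sum: 1 } } }
    followed by { $sort: { count: -1 } } over a list of plain dicts."""
    counts = Counter(d["namespace"] for d in docs)
    return [{"_id": ns, "count": n} for ns, n in counts.most_common()]

docs = [
    {"namespace": "production", "error_line": "OutOfMemoryError"},
    {"namespace": "production", "error_line": "Connection refused"},
    {"namespace": "staging", "error_line": "Timeout"},
]
print(errors_by_namespace(docs))
# -> [{'_id': 'production', 'count': 2}, {'_id': 'staging', 'count': 1}]
```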

## MySQL

Relational database storage with strong consistency.

### Schema

Logsenta auto-creates this table:

```sql
CREATE TABLE IF NOT EXISTS error_logs (
    id BIGINT AUTO_INCREMENT PRIMARY KEY,
    namespace VARCHAR(255) NOT NULL,
    pod_name VARCHAR(255) NOT NULL,
    container VARCHAR(255) NOT NULL,
    error_time DATETIME NOT NULL,
    error_line TEXT NOT NULL,
    log_entries JSON,
    metadata JSON,
    created_at DATETIME DEFAULT CURRENT_TIMESTAMP,
    INDEX idx_namespace (namespace),
    INDEX idx_pod_name (pod_name),
    INDEX idx_error_time (error_time),
    INDEX idx_namespace_pod_time (namespace, pod_name, error_time)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_unicode_ci;
```

### Installation

```bash
# Install MySQL
helm install mysql bitnami/mysql -n database --create-namespace \
  --set auth.rootPassword=$ROOT_PASSWORD \
  --set auth.username=logsenta \
  --set auth.password=$DB_PASSWORD \
  --set auth.database=k8s_logs

# Install Logsenta
helm install logsenta-engine logsenta/logsenta-engine \
  --namespace logsenta --create-namespace \
  --set storage.type=mysql \
  --set connections.mysql.host=mysql.database.svc.cluster.local \
  --set connections.mysql.username=logsenta \
  --set connections.mysql.password=$DB_PASSWORD
```

### Query Examples

```sql
-- Recent errors
SELECT * FROM error_logs
ORDER BY error_time DESC
LIMIT 100;

-- Errors by namespace
SELECT namespace, COUNT(*) AS error_count
FROM error_logs
GROUP BY namespace
ORDER BY error_count DESC;

-- Errors in last hour
SELECT * FROM error_logs
WHERE error_time > NOW() - INTERVAL 1 HOUR;

-- Search error text
SELECT * FROM error_logs
WHERE error_line LIKE '%OutOfMemory%';
```

## Azure Log Analytics

Native Azure cloud logging.

### Prerequisites

1. Azure subscription
2. Log Analytics Workspace
3. Workspace ID and Shared Key

### Get Credentials

```bash
# Using Azure CLI
az monitor log-analytics workspace show \
  --resource-group $RESOURCE_GROUP \
  --workspace-name $WORKSPACE_NAME \
  --query customerId -o tsv

az monitor log-analytics workspace get-shared-keys \
  --resource-group $RESOURCE_GROUP \
  --workspace-name $WORKSPACE_NAME
```

### Installation

```bash
helm install logsenta-engine logsenta/logsenta-engine \
  --namespace logsenta --create-namespace \
  --set storage.type=azure \
  --set connections.azure.workspaceId=$WORKSPACE_ID \
  --set connections.azure.sharedKey=$SHARED_KEY \
  --set connections.azure.logType=LogsentaK8sLogs
```
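Under the hood, the Log Analytics Data Collector API authenticates each POST with an HMAC-SHA256 signature derived from the workspace ID and the base64-encoded shared key, which is why both values are required above. A sketch of the standard signing scheme (the function name and sample values are illustrative, not Logsenta's internals):

```python
import base64
import hashlib
import hmac

def build_signature(workspace_id, shared_key, date_rfc1123, content_length):
    """Build the Authorization header value for the Data Collector API.

    The string-to-sign is:
    METHOD \n content-length \n content-type \n x-ms-date:<date> \n /api/logs
    """
    string_to_sign = (
        f"POST\n{content_length}\napplication/json\n"
        f"x-ms-date:{date_rfc1123}\n/api/logs"
    )
    key = base64.b64decode(shared_key)  # shared key is base64-encoded
    digest = hmac.new(key, string_to_sign.encode("utf-8"),
                      hashlib.sha256).digest()
    return f"SharedKey {workspace_id}:{base64.b64encode(digest).decode()}"

auth = build_signature(
    "11111111-2222-3333-4444-555555555555",          # workspace ID (fake)
    base64.b64encode(b"fake-shared-key").decode(),   # shared key (fake)
    "Mon, 08 Apr 2026 12:00:00 GMT",
    256,
)
```

The resulting value goes into the `Authorization` header alongside the matching `x-ms-date` header.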

### Query Examples (KQL)

```kusto
// All logs
LogsentaK8sLogs_CL
| take 100

// Errors by namespace
LogsentaK8sLogs_CL
| summarize count() by namespace_s
| order by count_ desc

// Search error text
LogsentaK8sLogs_CL
| where error_line_s contains "OutOfMemory"

// Time chart
LogsentaK8sLogs_CL
| summarize count() by bin(TimeGenerated, 1h)
| render timechart
```

## AWS CloudWatch

Native AWS cloud logging.

### Prerequisites

1. AWS account
2. IAM credentials or IRSA role
3. CloudWatch Logs permissions

### IAM Policy

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "logs:CreateLogGroup",
        "logs:CreateLogStream",
        "logs:PutLogEvents",
        "logs:DescribeLogGroups",
        "logs:DescribeLogStreams"
      ],
      "Resource": "arn:aws:logs:*:*:log-group:/kubernetes/logsenta*"
    }
  ]
}
```
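The `logs:PutLogEvents` permission above is the write path, and that API caps each batch at 10,000 events and 1,048,576 bytes, where every event is charged its UTF-8 message length plus 26 bytes of overhead. Any backend buffering errors has to chunk accordingly; a hypothetical sketch of that batching logic (not Logsenta's actual code):

```python
# PutLogEvents batch limits (per the CloudWatch Logs API).
MAX_EVENTS = 10_000
MAX_BYTES = 1_048_576
EVENT_OVERHEAD = 26  # bytes charged per event on top of the message

def chunk_events(messages):
    """Split messages into batches that respect the PutLogEvents limits."""
    batches, current, size = [], [], 0
    for msg in messages:
        cost = len(msg.encode("utf-8")) + EVENT_OVERHEAD
        if current and (len(current) >= MAX_EVENTS or size + cost > MAX_BYTES):
            batches.append(current)
            current, size = [], 0
        current.append(msg)
        size += cost
    if current:
        batches.append(current)
    return batches

# 25,000 short messages exceed the event-count cap, not the byte cap.
batches = chunk_events(["error line"] * 25_000)
```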

### Installation (IAM Credentials)

```bash
helm install logsenta-engine logsenta/logsenta-engine \
  --namespace logsenta --create-namespace \
  --set storage.type=aws \
  --set connections.aws.region=us-east-1 \
  --set connections.aws.logGroup=/kubernetes/logsenta \
  --set connections.aws.accessKeyId=$AWS_ACCESS_KEY_ID \
  --set connections.aws.secretAccessKey=$AWS_SECRET_ACCESS_KEY
```

### Installation (IRSA - Recommended)

```bash
helm install logsenta-engine logsenta/logsenta-engine \
  --namespace logsenta --create-namespace \
  --set storage.type=aws \
  --set connections.aws.region=us-east-1 \
  --set connections.aws.logGroup=/kubernetes/logsenta \
  --set serviceAccount.annotations."eks\.amazonaws\.com/role-arn"=arn:aws:iam::ACCOUNT:role/logsenta-role
```

### Query Examples (CloudWatch Insights)

```
# All logs
fields @timestamp, namespace, pod_name, error_line
| sort @timestamp desc
| limit 100

# Errors by namespace
stats count(*) as error_count by namespace
| sort error_count desc
```

## GCP Cloud Logging

Native Google Cloud logging.

### Prerequisites

1. GCP project
2. Service Account or Workload Identity
3. Logging Writer role

### Installation (Workload Identity - Recommended)

```bash
# Create GCP service account
gcloud iam service-accounts create logsenta \
  --display-name="Logsenta Engine"

# Grant logging permissions
gcloud projects add-iam-policy-binding $PROJECT_ID \
  --member="serviceAccount:logsenta@$PROJECT_ID.iam.gserviceaccount.com" \
  --role="roles/logging.logWriter"

# Bind to Kubernetes SA
gcloud iam service-accounts add-iam-policy-binding \
  logsenta@$PROJECT_ID.iam.gserviceaccount.com \
  --member="serviceAccount:$PROJECT_ID.svc.id.goog[logsenta/logsenta-engine]" \
  --role="roles/iam.workloadIdentityUser"

# Install Logsenta
helm install logsenta-engine logsenta/logsenta-engine \
  --namespace logsenta --create-namespace \
  --set storage.type=gcp \
  --set connections.gcp.projectId=$PROJECT_ID \
  --set connections.gcp.logName=logsenta \
  --set serviceAccount.annotations."iam\.gke\.io/gcp-service-account"=logsenta@$PROJECT_ID.iam.gserviceaccount.com
```

### Query Examples (Logs Explorer)

```
resource.type="k8s_container"
logName="projects/PROJECT_ID/logs/logsenta"
```

Filter by namespace:

```
jsonPayload.namespace="production"
```

Search error text:

```
jsonPayload.error_line:"OutOfMemory"
```
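Filters like these can also be assembled programmatically. A hypothetical helper that composes a Logs Explorer query string from the clauses shown above (in the Logging query language, `=` is an exact match and `:` is a substring match; the function and parameter names are illustrative):

```python
def build_filter(project_id, log_name="logsenta", namespace=None, contains=None):
    """Compose a Cloud Logging filter string from optional clauses."""
    clauses = [
        'resource.type="k8s_container"',
        f'logName="projects/{project_id}/logs/{log_name}"',
    ]
    if namespace:
        # Exact match on the namespace field.
        clauses.append(f'jsonPayload.namespace="{namespace}"')
    if contains:
        # Substring match on the error text.
        clauses.append(f'jsonPayload.error_line:"{contains}"')
    return "\n".join(clauses)

print(build_filter("my-project", namespace="production", contains="OutOfMemory"))
```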

## Elasticsearch

Full-text search and analytics.

### Installation

```bash
# Install Elasticsearch
helm install elasticsearch elastic/elasticsearch -n logging --create-namespace

# Install Logsenta
helm install logsenta-engine logsenta/logsenta-engine \
  --namespace logsenta --create-namespace \
  --set storage.type=elasticsearch \
  --set "connections.elasticsearch.hosts={http://elasticsearch-master.logging:9200}" \
  --set connections.elasticsearch.index=k8s-error-logs
```

### Query Examples

```bash
# Search all
curl -X GET "elasticsearch:9200/k8s-error-logs/_search?pretty"

# Search by namespace
curl -X GET "elasticsearch:9200/k8s-error-logs/_search?q=namespace:production"
```
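The same searches can be expressed as Query DSL bodies POSTed to `_search`. A sketch of two such request bodies (field mappings are assumptions here: `term` presumes `namespace` is indexed as a keyword, while `match` does analyzed full-text search over `error_line`):

```python
import json

def namespace_query(namespace):
    """Exact filter on the namespace field (assumed keyword-mapped)."""
    return {"query": {"term": {"namespace": namespace}}}

def error_text_query(text, size=100):
    """Full-text search over error_line, newest first."""
    return {
        "query": {"match": {"error_line": text}},
        "sort": [{"error_time": {"order": "desc"}}],
        "size": size,
    }

# Serialized body for: curl -X POST .../k8s-error-logs/_search -d "$body"
body = json.dumps(error_text_query("OutOfMemory"))
```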

## Comparison Matrix

| Feature | PostgreSQL | MongoDB | Azure | AWS | GCP | Elasticsearch |
|---|---|---|---|---|---|---|
| Setup Complexity | Medium | Medium | Low | Low | Low | High |
| Query Language | SQL | MQL | KQL | Insights | Filter | DSL |
| Full-text Search | Basic | Good | Good | Good | Good | Excellent |
| Scalability | Good | Good | Excellent | Excellent | Excellent | Excellent |
| Cost (Self-hosted) | Low | Low | N/A | N/A | N/A | Medium |
| Managed Service | Yes | Yes | Yes | Yes | Yes | Yes |
| Real-time | Yes | Yes | 5-30 min delay | Near real-time | Near real-time | Yes |
