9 changes: 5 additions & 4 deletions src/content/docs/explanations/configuration.mdx
@@ -141,8 +141,8 @@ Tenzir provides node-level TLS configuration that applies to all operators and
connectors using TLS/HTTPS connections. These settings are used by operators
that make outbound connections (e.g., <Op>to_opensearch</Op>,
<Op>to_splunk</Op>, <Op>save_email</Op>)
-and those that accept inbound connections (e.g., <Op>load_tcp</Op>,
-<Op>save_tcp</Op>).
+and those that accept inbound connections (e.g., <Op>accept_tcp</Op>,
+<Op>serve_tcp</Op>).

:::note[Use Only When Required]
We do not recommend manually configuring TLS settings unless required for
@@ -192,8 +192,9 @@ configuration:
- <Op>to_opensearch</Op>: Applies min version and ciphers to HTTPS connections
- <Op>to_splunk</Op>: Applies min version and ciphers to Splunk HEC connections
- <Op>save_email</Op>: Applies min version and ciphers to SMTP connections
-- <Op>load_tcp</Op>: Applies min version and ciphers to TLS server mode
-- <Op>save_tcp</Op>: Applies min version and ciphers to TLS client and server modes
+- <Op>accept_tcp</Op>: Applies min version and ciphers to TLS server mode
+- <Op>from_tcp</Op>: Applies min version and ciphers to TLS client mode
+- <Op>serve_tcp</Op>: Applies min version and ciphers to TLS server mode
- <Op>from_opensearch</Op>: Applies min version and ciphers to HTTPS connections

## Plugins
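To make the rename concrete, here is a minimal sketch of an inbound TLS listener using the new operator name, pieced together from examples elsewhere in this change (the endpoint, certificate file names, and topic are illustrative):

```tql
accept_tcp "0.0.0.0:9443", tls={certfile: "cert.pem", keyfile: "key.pem"} {
  read_json
}
publish "events"
```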
25 changes: 12 additions & 13 deletions src/content/docs/guides/collecting/get-data-from-the-network.mdx
@@ -8,16 +8,16 @@ capture raw packets from network interfaces.

## TCP sockets

-The [Transmission Control Protocol (TCP)](/integrations/tcp) provides reliable,
-ordered byte streams. Use TCP when you need guaranteed delivery and message
-ordering.
+<Integration>tcp</Integration> provides reliable, ordered byte streams. Use TCP
+when you need guaranteed delivery and message ordering.

### Listen for connections

-Start a TCP server that accepts incoming connections:
+Use <Op>accept_tcp</Op> to start a TCP server that accepts incoming
+connections:

```tql
-from "tcp://0.0.0.0:9000" {
+accept_tcp "0.0.0.0:9000" {
read_json
}
```
@@ -27,10 +27,10 @@ pipeline to convert incoming bytes to events.

### Connect to a remote server

-Act as a TCP client by connecting to an existing server:
+Use <Op>from_tcp</Op> to connect to an existing server:

```tql
-from "tcp://192.168.1.100:9000", connect=true {
+from_tcp "192.168.1.100:9000" {
read_json
}
```
@@ -40,7 +40,7 @@ from "tcp://192.168.1.100:9000", connect=true {
Secure your TCP connections with TLS by passing a `tls` record:

```tql
-from "tcp://0.0.0.0:9443", tls={certfile: "cert.pem", keyfile: "key.pem"} {
+accept_tcp "0.0.0.0:9443", tls={certfile: "cert.pem", keyfile: "key.pem"} {
read_json
}
```
@@ -56,9 +56,8 @@ For production TLS configuration, including mutual TLS and cipher settings, see

## UDP sockets

-The [User Datagram Protocol (UDP)](/integrations/udp) is a connectionless
-protocol ideal for high-volume, loss-tolerant data like syslog messages or
-metrics.
+<Integration>udp</Integration> is a connectionless protocol ideal for
+high-volume, loss-tolerant data like syslog messages or metrics.

### Receive UDP datagrams

@@ -99,8 +98,8 @@ this = {

## Packet capture

-Capture raw network packets from a [network interface card (NIC)](/integrations/nic)
-for deep packet inspection or network forensics.
+Capture raw network packets with <Integration>nic</Integration> for deep
+packet inspection or network forensics.

### List available interfaces

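As a recap of the client-side example above, here is a sketch that wires the TCP client into a downstream topic; the address is illustrative, and the `publish` topic name is an assumption for this example:

```tql
from_tcp "192.168.1.100:9000" {
  read_json
}
publish "network-events"
```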
4 changes: 2 additions & 2 deletions src/content/docs/guides/node-setup/configure-tls.mdx
@@ -21,8 +21,8 @@ tenzir:
```

These settings apply automatically to operators like <Op>from_http</Op>,
-<Op>load_tcp</Op>, <Op>save_tcp</Op>,
-<Op>to_opensearch</Op>, <Op>from_opensearch</Op>,
+<Op>accept_tcp</Op>, <Op>from_tcp</Op>,
+<Op>serve_tcp</Op>, <Op>to_opensearch</Op>, <Op>from_opensearch</Op>,
<Op>to_splunk</Op>, <Op>save_email</Op>,
and <Op>to_fluent_bit</Op>.

6 changes: 2 additions & 4 deletions src/content/docs/guides/routing/load-balance-pipelines.mdx
@@ -17,8 +17,7 @@ nested pipelines, enabling you to spread load across multiple destinations.
let $endpoints = ["host1:8080", "host2:8080", "host3:8080"]
subscribe "events"
load_balance $endpoints {
-write_json
-save_tcp $endpoints
+to_tcp $endpoints { write_json }
}
```

@@ -36,8 +35,7 @@ let $cfg = ["192.168.0.30:8080", "192.168.0.31:8080"]

subscribe "input"
load_balance $cfg {
-write_json
-save_tcp $cfg
+to_tcp $cfg { write_json }
}
```

7 changes: 3 additions & 4 deletions src/content/docs/guides/routing/send-to-destinations.mdx
@@ -57,8 +57,8 @@ to_opensearch "https://opensearch.example.com:9200",

### Cloud services

-Route events to cloud destinations like [Amazon SQS](/integrations/amazon/sqs)
-and [Google Cloud Pub/Sub](/integrations/google/cloud-pubsub).
+Route events to cloud destinations like <Integration>amazon/sqs</Integration>
+and <Integration>google/cloud-pubsub</Integration>.

Send to SQS:

@@ -105,8 +105,7 @@ save_file "s3://bucket/logs/events.jsonl"
Send NDJSON over <Integration>tcp</Integration>:

```tql
-write_json
-save_tcp "collector.example.com:5044"
+to_tcp "collector.example.com:5044" { write_json }
```

## Expression-based serialization
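Putting the pieces of this guide together, here is a sketch that subscribes to a topic and forwards NDJSON over TCP; the topic and collector address are illustrative:

```tql
subscribe "events"
to_tcp "collector.example.com:5044" { write_json }
```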
36 changes: 13 additions & 23 deletions src/content/docs/integrations/index.mdx
@@ -9,43 +9,33 @@ packages at the top to native protocol connectors at the core.

## Packages

-<Explanation>packages</Explanation> are 1-click deployable integrations that deliver instant value.
-They bundle pipelines, [enrichment contexts](/explanations/enrichment/), and
-configurations for common security tools like Splunk, CrowdStrike, Elastic,
-SentinelOne, Palo Alto, and many more.
+<Explanation>packages</Explanation> are 1-click deployable integrations that
+deliver instant value. They bundle pipelines,
+<Explanation>enrichment</Explanation>, and configurations for common security
+tools like Splunk, CrowdStrike, Elastic, SentinelOne, Palo Alto, and many more.

-Browse our freely available [package library on
-GitHub](https://github.com/tenzir/library).
+Browse our freely available [package library on GitHub](https://github.com/tenzir/library). You can also use <Guide>ai-workbench/use-agent-skills</Guide> to generate custom packages with AI assistance.

## Core Integrations

Core integrations are native connectors to the ecosystem, enabling
communication over numerous protocols and APIs:

-- **Cloud storage**: <Integration>amazon/s3</Integration>,
-[GCS](/integrations/google/cloud-storage),
-<Integration>microsoft/azure-blob-storage</Integration>
-- **Message queues**: <Integration>kafka</Integration>,
-<Integration>amazon/sqs</Integration>, <Integration>amqp</Integration>
-- **Databases**: <Integration>snowflake</Integration>,
-<Integration>clickhouse</Integration>
-- **Network protocols**: <Integration>tcp</Integration>, <Integration>udp</Integration>,
-<Integration>http</Integration>, <Integration>syslog</Integration>
+- **Cloud storage**: <Integration>amazon/s3</Integration>, <Integration>google/cloud-storage</Integration>, <Integration>microsoft/azure-blob-storage</Integration>
+- **Message queues**: <Integration>kafka</Integration>, <Integration>amazon/sqs</Integration>, <Integration>amqp</Integration>
+- **Databases**: <Integration>snowflake</Integration>, <Integration>clickhouse</Integration>, <Integration>mysql</Integration>
+- **Network protocols**: <Integration>tcp</Integration>, <Integration>udp</Integration>, <Integration>http</Integration>, <Integration>syslog</Integration>

Under the hood, core integrations use a C++ plugin abstraction to provide an
[operator](/reference/operators/), [function](/reference/functions/), or
[context](/explanations/enrichment/) that you can use in TQL to directly
interface with the respective resource, such as a TCP socket or cloud storage
-bucket. We typically implement this functionality using the respective SDK, such as the
-[AWS SDK](https://aws.amazon.com/sdk-for-cpp/), [Google Cloud
+bucket. We typically implement this functionality using the respective SDK, such
+as the [AWS SDK](https://aws.amazon.com/sdk-for-cpp/), [Google Cloud
SDK](https://cloud.google.com/cpp), or
[librdkafka](https://github.com/confluentinc/librdkafka), though some
integrations require a custom implementation.

-:::note[Dedicated Operators]
-For some applications, we provide a **dedicated operator** that dramatically
-simplifies the user experience. For example,
-<Op>to_splunk</Op> and
-<Op>from_opensearch</Op> offer a
-streamlined interface compared to composing generic HTTP or protocol operators.
+:::note[Dedicated operators]
+For some applications, we provide a **dedicated operator** that dramatically simplifies the user experience. For example, <Op>to_splunk</Op> and <Op>from_opensearch</Op> offer a streamlined interface compared to composing generic HTTP or protocol operators.
:::
14 changes: 6 additions & 8 deletions src/content/docs/integrations/microsoft/windows-event-logs.mdx
@@ -221,10 +221,8 @@ configuration:
Import the logs via TCP:

```tql
-load_tcp "127.0.0.1:4000",
-tls=true,
-certfile="key_and_cert.pem",
-keyfile="key_and_cert.pem" {
+accept_tcp "127.0.0.1:4000",
+tls={certfile: "key_and_cert.pem", keyfile: "key_and_cert.pem"} {
read_json
}
import
@@ -251,7 +249,7 @@ configuration to publish to the `nxlog` topic:
</Output>
```

-Then use our [Kafka integration](/integrations/kafka) to read from the topic:
+Then use <Integration>kafka</Integration> to read from the topic:

```tql
from_kafka "nxlog"
@@ -589,7 +587,7 @@ Accept the logs sent with the configuration above into Tenzir via
<Integration>tcp</Integration>:

```tql
-load_tcp "10.0.0.1:1514" {
+accept_tcp "10.0.0.1:1514" {
read_json
}
publish "wec"
@@ -618,7 +616,7 @@ Security monitoring often focuses on specific event types. Filter for logon
events (Event ID 4624) and failed logon attempts (Event ID 4625):

```tql
-load_tcp "10.0.0.1:1514" {
+accept_tcp "10.0.0.1:1514" {
read_delimited "</Event>\n", include_separator=true
}
this = data.parse_winlog()
@@ -631,7 +629,7 @@ The `EventData` section contains event-specific fields. For a successful logon
event, extract the relevant information:

```tql
-load_tcp "10.0.0.1:1514" {
+accept_tcp "10.0.0.1:1514" {
read_delimited "</Event>\n", include_separator=true
}
this = data.parse_winlog()
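A sketch combining the steps above into one pipeline that keeps only logon events (Event IDs 4624 and 4625). Note that the field name carrying the event ID after `parse_winlog()` is an assumption here; `EventID` is hypothetical, so inspect the parsed schema before relying on it:

```tql
accept_tcp "10.0.0.1:1514" {
  read_delimited "</Event>\n", include_separator=true
}
this = data.parse_winlog()
where EventID == 4624 or EventID == 4625
publish "wec-logons"
```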