diff --git a/src/current/_includes/v25.3/backups/locality-aware-multi-tenant.md b/src/current/_includes/v25.3/backups/locality-aware-multi-tenant.md
index 896d29db2d6..dfa68c855e8 100644
--- a/src/current/_includes/v25.3/backups/locality-aware-multi-tenant.md
+++ b/src/current/_includes/v25.3/backups/locality-aware-multi-tenant.md
@@ -1 +1 @@
-Both CockroachDB {{ site.data.products.standard }} and CockroachDB {{ site.data.products.basic }} clusters operate with a different architecture compared to CockroachDB {{ site.data.products.core }}. These architectural differences have implications for how locality-aware backups can run. {{ site.data.products.standard }} and {{ site.data.products.basic }} clusters will scale resources depending on whether they are actively in use. This makes it less likely to have a SQL pod available in every locality. As a result, your Serverless cluster may not have a SQL pod in the locality where the data resides, which can lead to the cluster uploading that data to a storage bucket in a locality where you do have active SQL pods. You should consider this as you plan a backup strategy that must comply with [data domiciling]({% link v23.2/data-domiciling.md %}) requirements.
+Both CockroachDB {{ site.data.products.standard }} and CockroachDB {{ site.data.products.basic }} clusters operate with a different architecture compared to CockroachDB {{ site.data.products.core }}. These architectural differences have implications for how locality-aware backups can run. {{ site.data.products.standard }} and {{ site.data.products.basic }} clusters will scale resources depending on whether they are actively in use. This makes it less likely to have a SQL pod available in every locality. As a result, your cluster may not have a SQL pod in the locality where the data resides, which can lead to the cluster uploading that data to a storage bucket in a locality where you do have active SQL pods. You should consider this as you plan a backup strategy that must comply with [data domiciling]({% link {{ page.version.version }}/data-domiciling.md %}) requirements.
diff --git a/src/current/_includes/v25.3/backward-incompatible/alpha.1.md b/src/current/_includes/v25.3/backward-incompatible/alpha.1.md
index b8b1f37d137..8d4004422ad 100644
--- a/src/current/_includes/v25.3/backward-incompatible/alpha.1.md
+++ b/src/current/_includes/v25.3/backward-incompatible/alpha.1.md
@@ -1,15 +1,15 @@
- CockroachDB no longer performs environment variable expansion in the parameter `--certs-dir`. Uses like `--certs-dir='$HOME/path'` (expansion by CockroachDB) can be replaced by `--certs-dir="$HOME/path"` (expansion by the Unix shell). [#81298][#81298]
-- In the Cockroach CLI, [`BOOL` values](../v23.1/bool.html) are now formatted as `t` or `f` instead of `True` or `False`. [#81943][#81943]
-- Removed the `cockroach quit` command. It has been deprecated since v20.1. To [shut down a node](../v23.1/node-shutdown.html) gracefully, send a `SIGTERM` signal to it. [#82988][#82988]
-- Added a cluster version to allow the [Pebble storage engine](../v23.1/architecture/storage-layer.html#pebble) to recombine certain SSTables (specifically, user keys that are split across multiple files in a level of the [log-structured merge-tree](../v23.1/architecture/storage-layer.html#log-structured-merge-trees)). Recombining the split user keys is required for supporting the range keys feature. The migration to recombine the SSTables is expected to be short (split user keys are rare in practice), but will block subsequent migrations until all tables have been recombined. The `storage.marked-for-compaction-files` time series metric can show the progress of the migration. [#84887][#84887]
+- In the Cockroach CLI, [`BOOL` values]({% link {{ page.version.version }}/bool.md %}) are now formatted as `t` or `f` instead of `True` or `False`. [#81943][#81943]
+- Removed the `cockroach quit` command. It has been deprecated since v20.1. To [shut down a node]({% link {{ page.version.version }}/node-shutdown.md %}) gracefully, send a `SIGTERM` signal to it. [#82988][#82988]
+- Added a cluster version to allow the [Pebble storage engine]({% link {{ page.version.version }}/architecture/storage-layer.md %}#pebble) to recombine certain SSTables (specifically, user keys that are split across multiple files in a level of the [log-structured merge-tree]({% link {{ page.version.version }}/architecture/storage-layer.md %}#log-structured-merge-trees)). Recombining the split user keys is required for supporting the range keys feature. The migration to recombine the SSTables is expected to be short (split user keys are rare in practice), but will block subsequent migrations until all tables have been recombined. The `storage.marked-for-compaction-files` time series metric can show the progress of the migration. [#84887][#84887]
- Using a single TCP port listener for both RPC (node-node) and SQL client connections is now deprecated. This capability **will be removed** in the next version of CockroachDB. Instead, make one of the following configuration changes to your CockroachDB deployment:
- Preferred: keep port `26257` for SQL, and allocate a new port, e.g., `26357`, for node-node RPC connections. For example, you might configure a node with the flags `--listen-addr=:26357 --sql-addr=:26257`, where subsequent nodes seeking to join would then use the flag `--join=othernode:26357,othernode:26257`. This will become the default configuration in the next version of CockroachDB. When using this mode of operation, care should be taken to use a `--join` flag that includes both the previous and new port numbers for other nodes, so that no network partition occurs during the upgrade.
- Optional: keep port `26257` for RPC, and allocate a new port, e.g., `26357`, for SQL connections. For example, you might configure a node with the flags `--listen-addr=:26257 --sql-addr=:26357`. When using this mode of operation, the `--join` flags do not need to be modified. However, SQL client apps or the SQL load balancer configuration (when in use) must be updated to use the new SQL port number. [#85671][#85671]
-- If no `nullif` option is specified while using [`IMPORT CSV`](../v23.1/import.html), then a zero-length string in the input is now treated as `NULL`. The quoted empty string in the input is treated as an empty string. Similarly, if `nullif` is specified, then an unquoted value is treated as `NULL`, and a quoted value is treated as that string. These changes were made to make `IMPORT CSV` behave more similarly to `COPY CSV`. If the previous behavior (i.e., treating either quoted or unquoted values that match the `nullif` setting as `NULL`) is desired, you can use the new `allow_quoted_null` option in the `IMPORT` statement. [#84487][#84487]
-- [`COPY FROM`](../v23.1/copy.html) operations are now atomic by default instead of being segmented into 100 row transactions. Set the `copy_from_atomic_enabled` session setting to `false` for the previous behavior. [#85986][#85986]
-- The `GRANT` privilege has been removed and replaced by the more granular [`WITH GRANT OPTION`]({% link v25.3/grant.md %}#grant-privileges-with-the-option-to-grant-to-others), which provides control over which privileges are allowed to be granted. [#81310][#81310]
+- If no `nullif` option is specified while using [`IMPORT CSV`]({% link {{ page.version.version }}/import.md %}), then a zero-length string in the input is now treated as `NULL`. The quoted empty string in the input is treated as an empty string. Similarly, if `nullif` is specified, then an unquoted value is treated as `NULL`, and a quoted value is treated as that string. These changes were made to make `IMPORT CSV` behave more similarly to `COPY CSV`. If the previous behavior (i.e., treating either quoted or unquoted values that match the `nullif` setting as `NULL`) is desired, you can use the new `allow_quoted_null` option in the `IMPORT` statement. [#84487][#84487]
+- [`COPY FROM`]({% link {{ page.version.version }}/copy.md %}) operations are now atomic by default instead of being segmented into 100 row transactions. Set the `copy_from_atomic_enabled` session setting to `false` for the previous behavior. [#85986][#85986]
+- The `GRANT` privilege has been removed and replaced by the more granular [`WITH GRANT OPTION`]({% link {{ page.version.version }}/grant.md %}#grant-privileges-with-the-option-to-grant-to-others), which provides control over which privileges are allowed to be granted. [#81310][#81310]
- Removed the ability to cast `int`, `int2`, and `int8` to a `0` length `BIT` or `VARBIT`. [#81266][#81266]
- Removed the deprecated `GRANT` privilege. [#81310][#81310]
- Removed the `ttl_automatic_column` storage parameter. The `crdb_internal_expiration` column is created when `ttl_expire_after` is set and removed when `ttl_expire_after` is reset. [#83134][#83134]
- Removed the byte string parameter in the `crdb_internal.schedule_sql_stats_compaction` function. [#82560][#82560]
-- Changed the default value of the `enable_implicit_transaction_for_batch_statements` to `true`. This means that a [batch of statements]({% link v25.3/transactions.md %}#batched-statements) sent in one string separated by semicolons is treated as an implicit transaction. [#76834][#76834]
+- Changed the default value of the `enable_implicit_transaction_for_batch_statements` to `true`. This means that a [batch of statements]({% link {{ page.version.version }}/transactions.md %}#batched-statements) sent in one string separated by semicolons is treated as an implicit transaction. [#76834][#76834]
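For illustration, a minimal sketch of the batched-statement behavior described in the last item above, assuming a hypothetical `accounts` table. When a driver sends both statements to the server as a single string, they now execute as one implicit transaction:

~~~ sql
-- Sent from the client as a single string. With
-- enable_implicit_transaction_for_batch_statements = true (the new default),
-- both statements commit together or roll back together.
INSERT INTO accounts (id, balance) VALUES (1, 100); UPDATE accounts SET balance = balance - 10 WHERE id = 1;
~~~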
diff --git a/src/current/_includes/v25.3/misc/movr-schema.md b/src/current/_includes/v25.3/misc/movr-schema.md
index e838bcf4572..1c51e944d74 100644
--- a/src/current/_includes/v25.3/misc/movr-schema.md
+++ b/src/current/_includes/v25.3/misc/movr-schema.md
@@ -9,4 +9,4 @@ Table | Description
`user_promo_codes` | Promotional codes in use by users.
`vehicle_location_histories` | Vehicle location history.
-
+
diff --git a/src/current/_includes/v25.3/performance/check-rebalancing-after-partitioning.md b/src/current/_includes/v25.3/performance/check-rebalancing-after-partitioning.md
index c7e19142bd4..d46472fdfd4 100644
--- a/src/current/_includes/v25.3/performance/check-rebalancing-after-partitioning.md
+++ b/src/current/_includes/v25.3/performance/check-rebalancing-after-partitioning.md
@@ -2,7 +2,7 @@ Over the next minutes, CockroachDB will rebalance all partitions based on the co
To check this at a high level, access the Web UI on any node at `:8080` and look at the **Node List**. You'll see that the range count is still close to even across all nodes but much higher than before partitioning:
-
+
To check at a more granular level, SSH to one of the instances not running CockroachDB and run the `SHOW EXPERIMENTAL_RANGES` statement on the `vehicles` table:
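For example, assuming the tutorial's `movr` database is the current database, the statement looks like this:

~~~ sql
SHOW EXPERIMENTAL_RANGES FROM TABLE vehicles;
~~~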
diff --git a/src/current/_includes/v25.3/performance/check-rebalancing.md b/src/current/_includes/v25.3/performance/check-rebalancing.md
index 32e3d98f8f1..3c63a0a2f0e 100644
--- a/src/current/_includes/v25.3/performance/check-rebalancing.md
+++ b/src/current/_includes/v25.3/performance/check-rebalancing.md
@@ -2,7 +2,7 @@ Since you started each node with the `--locality` flag set to its GCE zone, over
To check this, access the DB Console on any node at `:8080` and look at the **Node List**. You'll see that the range count is more or less even across all nodes:
-
+
For reference, here's how the nodes map to zones:
diff --git a/src/current/_includes/v25.3/sql/macos-terminal-configuration.md b/src/current/_includes/v25.3/sql/macos-terminal-configuration.md
index 5b636259ce1..85d5461f8f1 100644
--- a/src/current/_includes/v25.3/sql/macos-terminal-configuration.md
+++ b/src/current/_includes/v25.3/sql/macos-terminal-configuration.md
@@ -3,12 +3,12 @@ In **Apple Terminal**:
1. Navigate to "Preferences", then "Profiles", then "Keyboard".
1. Enable the checkbox "Use Option as Meta Key".
-
+
In **iTerm2**:
1. Navigate to "Preferences", then "Profiles", then "Keys".
1. Select the radio button "Esc+" for the behavior of the Left Option Key.
-
+
diff --git a/src/current/_includes/v25.3/start-in-docker/mac-linux-steps.md b/src/current/_includes/v25.3/start-in-docker/mac-linux-steps.md
index 9de4428329d..d80728b8a62 100644
--- a/src/current/_includes/v25.3/start-in-docker/mac-linux-steps.md
+++ b/src/current/_includes/v25.3/start-in-docker/mac-linux-steps.md
@@ -276,7 +276,7 @@ The [DB Console]({% link {{ page.version.version }}/ui-overview.md %}) gives you
1. On the [**Cluster Overview**]({% link {{ page.version.version }}/ui-cluster-overview-page.md %}), notice that three nodes are live, with an identical replica count on each node:
-
+
This demonstrates CockroachDB's [automated replication]({% link {{ page.version.version }}/demo-replication-and-rebalancing.md %}) of data via the Raft consensus protocol.
@@ -286,7 +286,7 @@ The [DB Console]({% link {{ page.version.version }}/ui-overview.md %}) gives you
1. Click [**Metrics**]({% link {{ page.version.version }}/ui-overview-dashboard.md %}) to access a variety of time series dashboards, including graphs of SQL queries and service latency over time:
-
+
1. Use the [**Databases**]({% link {{ page.version.version }}/ui-databases-page.md %}), [**Statements**]({% link {{ page.version.version }}/ui-statements-page.md %}), and [**Jobs**]({% link {{ page.version.version }}/ui-jobs-page.md %}) pages to view details about your databases and tables, to assess the performance of specific queries, and to monitor the status of long-running operations like schema changes, respectively.
1. Optionally verify that DB Console instances for `roach2` and `roach3` are reachable on ports 8081 and 8082 and show the same information as port 8080.
diff --git a/src/current/_includes/v25.3/topology-patterns/multi-region-cluster-setup.md b/src/current/_includes/v25.3/topology-patterns/multi-region-cluster-setup.md
index cf8bf46a9a1..88438903d0a 100644
--- a/src/current/_includes/v25.3/topology-patterns/multi-region-cluster-setup.md
+++ b/src/current/_includes/v25.3/topology-patterns/multi-region-cluster-setup.md
@@ -1,6 +1,6 @@
Each [multi-region pattern]({% link {{ page.version.version }}/topology-patterns.md %}#multi-region) assumes the following setup:
-
+
#### Hardware
diff --git a/src/current/_includes/v25.3/ui/active-statement-executions.md b/src/current/_includes/v25.3/ui/active-statement-executions.md
index e2254c2ac02..d2bc1ace62a 100644
--- a/src/current/_includes/v25.3/ui/active-statement-executions.md
+++ b/src/current/_includes/v25.3/ui/active-statement-executions.md
@@ -30,7 +30,7 @@ The statement execution details page provides the following details on the state
If a statement execution is waiting, the statement execution details are followed by Contention Insights and details of the statement execution on which the blocked statement execution is waiting. For more information about contention, see [Understanding and avoiding transaction contention]({{ link_prefix }}performance-best-practices-overview.html#understanding-and-avoiding-transaction-contention).
-
+
## See also
diff --git a/src/current/_includes/v25.3/ui/active-transaction-executions.md b/src/current/_includes/v25.3/ui/active-transaction-executions.md
index ee2ee069ad1..62a4c44ddf0 100644
--- a/src/current/_includes/v25.3/ui/active-transaction-executions.md
+++ b/src/current/_includes/v25.3/ui/active-transaction-executions.md
@@ -35,7 +35,7 @@ The transaction execution details page provides the following details on the tra
If a transaction execution is waiting, the transaction execution details are followed by Contention Insights and details of the transaction execution on which the blocked transaction execution is waiting. For more information about contention, see [Transaction contention]({{ link_prefix }}performance-best-practices-overview.html#transaction-contention).
-
+
## See also
diff --git a/src/current/_includes/v25.3/ui/insights.md b/src/current/_includes/v25.3/ui/insights.md
index c18ca8897a7..e2eb9ca7379 100644
--- a/src/current/_includes/v25.3/ui/insights.md
+++ b/src/current/_includes/v25.3/ui/insights.md
@@ -164,7 +164,7 @@ To test this functionality, you can generate a SQL query with a [Slow Execution]
~~~
1. On the Insights page, in the **Columns** selector, check **Query Tags** and click **Apply**.
1. For the row where **Statement Execution** is `SELECT pg_sleep()`, scroll to the right to see the key-value pairs from the SQL comment displayed in the **Query Tags** column.
-
+
1. On the same row, click on the **Latest Statement Execution ID** (the first column on the left) to open the [**Statement Execution** details](#statement-execution-details) page. These key-value pairs also appear on the **Overview** tab under **Query Tags**.
### Statement Execution details
@@ -222,11 +222,11 @@ The Workload Insights tab surfaces the following type of insights:
The transaction or statement execution failed. The following screenshot shows a failed transaction execution:
-
+
The following screenshot shows the default details of the preceding failed transaction execution.
-
+
The **Insights** column shows the name of the insight, in this case **Failed Execution**. The **Details** column provides the **Error Code** and **Error Message**. CockroachDB uses [PostgreSQL Error Codes](https://www.postgresql.org/docs/current/errcodes-appendix.html). In this example, Error Code `40001` is a `serialization_failure`.
@@ -234,7 +234,7 @@ The **Insights** column shows the name of the insight, in this case **Failed Exe
The following screenshot shows the conditional details of the preceding failed transaction execution. In this case, there was a *serialization conflict*, also known as an *isolation conflict*, due to [transaction contention]({{ link_prefix }}performance-recipes.html#transaction-contention). (For transaction contention that causes *lock contention*, see [High Contention](#high-contention)).
-
+
To capture more information in the event of a failed transaction execution due to a serialization conflict, set the [`sql.contention.record_serialization_conflicts.enabled`]({{ link_prefix }}cluster-settings.html#setting-sql-contention-record-serialization-conflicts-enabled) cluster setting to `true` (default). With this setting enabled, when the **Error Code** is `40001` and the **Error Message** specifically has [`RETRY_SERIALIZABLE - failed preemptive refresh`]({{ link_prefix }}transaction-retry-error-reference.html#failed_preemptive_refresh)` due to conflicting locks`, a conditional **Failed Execution** section is displayed with **Conflicting Transaction** and **Conflicting Location** information.
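As a minimal sketch, the cluster setting named above can be set (or confirmed, since `true` is the default) as follows:

~~~ sql
SET CLUSTER SETTING sql.contention.record_serialization_conflicts.enabled = true;
~~~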
@@ -248,11 +248,11 @@ To troubleshoot, refer to the performance tuning recipe for [identifying and unb
The following screenshot shows the execution of a transaction flagged with **High Contention**:
-
+
The following screenshot shows the execution details of the preceding transaction execution:
-
+
### High Retry Count
@@ -273,11 +273,11 @@ The statement execution has resulted in one or more [index recommendations](#sch
The following screenshot shows the statement execution of the query described in [Use the right index]({{ link_prefix }}apply-statement-performance-rules.html#rule-2-use-the-right-index):
-
+
The following screenshot shows the execution details of the preceding statement execution:
-
+
The **Insights** column shows the name of the insight, in this case **Suboptimal Plan**. The **Details** column provides details on the insight, such as a **Description** with the cause of the suboptimal plan and a **Recommendation** with a `CREATE INDEX` statement. The final column contains a **Create Index** button. Click the **Create Index** button to execute the recommended statement to mitigate the cause of the insight.
@@ -299,7 +299,7 @@ This view lists the [indexes]({{ link_prefix }}indexes.html) that have not been
The following screenshot shows the insight that displays after you run the query described in [Use the right index]({{ link_prefix }}apply-statement-performance-rules.html#rule-2-use-the-right-index) six or more times:
-
+
CockroachDB uses the threshold of six executions before offering an insight because it assumes that you are no longer merely experimenting with a query at that point.
diff --git a/src/current/_includes/v25.3/ui/jobs.md b/src/current/_includes/v25.3/ui/jobs.md
index c67c55d9a0c..b7edf59e935 100644
--- a/src/current/_includes/v25.3/ui/jobs.md
+++ b/src/current/_includes/v25.3/ui/jobs.md
@@ -8,7 +8,7 @@ The Jobs list is designed for you to manage pending work. It is not intended to
Use the **Jobs** table to see recently created and completed jobs.
-
+
### Filter jobs
@@ -66,7 +66,7 @@ The details show:
- **User Name**
- error messages (if any)
-
+
## See also
diff --git a/src/current/_includes/v25.3/ui/refresh.md b/src/current/_includes/v25.3/ui/refresh.md
index d34c28da080..9f1f152b630 100644
--- a/src/current/_includes/v25.3/ui/refresh.md
+++ b/src/current/_includes/v25.3/ui/refresh.md
@@ -6,7 +6,7 @@ To control refresh of the data on the Active Executions views of the SQL Activit
- A manual **Refresh** button: When clicked, refreshes data immediately.
- An **Auto Refresh** toggle: When toggled **On** (default), refreshes data immediately and then automatically every 10 seconds. When toggled **Off**, stops automatic data refresh. The toggle setting is shared by both the Statements and the Transactions pages. Changing the setting on one page changes it on the other page.
-
+
If **Auto Refresh** is toggled **On**, navigating to the Active Executions view on either the Statements page or Transactions page refreshes the data.
diff --git a/src/current/_includes/v25.3/ui/sessions.md b/src/current/_includes/v25.3/ui/sessions.md
index 8803337459b..6194f3ae6c5 100644
--- a/src/current/_includes/v25.3/ui/sessions.md
+++ b/src/current/_includes/v25.3/ui/sessions.md
@@ -1,10 +1,10 @@
{% if page.cloud != true %}
-
+
{% endif %}
To filter the sessions, click the **Filters** field.
-
+
- To filter by [application]({{ link_prefix }}connection-parameters.html#additional-connection-parameters), select **Application Name** and choose one or more applications.
@@ -40,7 +40,7 @@ To view details of a session, click a **Session Start Time (UTC)** to display se
If a session is idle, the **Transaction** and **Most Recent Statement** panels will display **No Active [Transaction | Statement]**.
{% if page.cloud != true %}
-
+
{% endif %}
The **Cancel statement** button ends the SQL statement. The session running this statement will receive an error.
diff --git a/src/current/_includes/v25.3/ui/statement-details.md b/src/current/_includes/v25.3/ui/statement-details.md
index 4c846eed9fa..66008abe036 100644
--- a/src/current/_includes/v25.3/ui/statement-details.md
+++ b/src/current/_includes/v25.3/ui/statement-details.md
@@ -35,7 +35,7 @@ The **Overview** section also displays the SQL statement fingerprint statistics
The following screenshot shows the statement fingerprint of the query described in [Use the right index]({{ link_prefix }}apply-statement-performance-rules.html#rule-2-use-the-right-index):
-
+
#### Insights
@@ -47,7 +47,7 @@ The **Insights** table is displayed when CockroachDB has detected a problem with
The following screenshot shows the insights of the statement fingerprint illustrated in [Overview](#overview):
-
+
#### Charts
@@ -65,7 +65,7 @@ Charts following the execution attributes display statement fingerprint statisti
The following charts summarize the executions of the statement fingerprint illustrated in [Overview](#overview):
-
+
### Explain Plans
@@ -73,7 +73,7 @@ The **Explain Plans** tab displays statement plans for an [explainable statement
The following screenshot shows an execution of the query discussed in [Overview](#overview):
-
+
The plan table shows the following details:
@@ -94,7 +94,7 @@ Vectorized | Whether the execution used the [vectorized execution engine]({{ lin
To display the plan that was executed, click the plan gist. For the plan gist `AgHUAQIABQAAAAHYAQIAiA...`, the following plan displays:
-
+
#### Insights
@@ -102,7 +102,7 @@ The plan table displays the number of insights related to the plan. If a plan ha
The following screenshot shows 1 insight found after running the query discussed in [Overview](#overview) 6 or more times:
-
+
{{site.data.alerts.callout_info}}
CockroachDB uses the threshold of 6 executions before offering an insight because it assumes that you are no longer merely experimenting with a query at that point.
@@ -114,7 +114,7 @@ If you click **Create Index**, a confirmation dialog displays a warning about th
If you click **Apply** to create the index and then execute the statement again, the **Explain Plans** tab will show that the second execution (in this case at `19:40`), which uses the index and has no insight, takes less time than the first 6 executions.
-
+
### Diagnostics
@@ -134,7 +134,7 @@ Diagnostics will be collected a maximum of *N* times for a given activated finge
#### Activate diagnostics collection and download bundles
-
+
{{site.data.alerts.callout_danger}}
Collecting diagnostics has an impact on performance. All executions of the statement fingerprint will run slower until diagnostics are collected.
@@ -156,11 +156,11 @@ To activate diagnostics collection:
When the statement fingerprint is executed according to the statement diagnostic options selected, a row with the activation time and collection status is added to the **Statement diagnostics** table.
-
+
The collection status values are:
-- **READY**: indicates that the diagnostics have been collected. To download the diagnostics bundle, click
**Bundle (.zip)**.
+- **READY**: indicates that the diagnostics have been collected. To download the diagnostics bundle, click
**Bundle (.zip)**.
- **WAITING**: indicates that a SQL statement matching the fingerprint has not yet been recorded. To cancel diagnostics collection, click the **Cancel request** button.
- **ERROR**: indicates that the attempt at diagnostics collection failed.
@@ -173,4 +173,4 @@ Although fingerprints are periodically cleared from the Statements page, all dia
- Click **Advanced Debug** in the left-hand navigation and click [Statement Diagnostics History]({% link {{ page.version.version }}/ui-debug-pages.md %}#reports).
{% endif %}
-Click
**Bundle (.zip)** to download any diagnostics bundle.
+Click
**Bundle (.zip)** to download any diagnostics bundle.
diff --git a/src/current/_includes/v25.3/ui/statements-filter.md b/src/current/_includes/v25.3/ui/statements-filter.md
index 34305a565e4..0daac06c1f8 100644
--- a/src/current/_includes/v25.3/ui/statements-filter.md
+++ b/src/current/_includes/v25.3/ui/statements-filter.md
@@ -72,4 +72,4 @@ To filter the statements:
The following screenshot shows the statements that contain the string `rides` for the `movr` application filtered by `Statement Type: DML`:
-
+
diff --git a/src/current/_includes/v25.3/ui/statements-views.md b/src/current/_includes/v25.3/ui/statements-views.md
index 607e05528f2..4d5fdd5ff30 100644
--- a/src/current/_includes/v25.3/ui/statements-views.md
+++ b/src/current/_includes/v25.3/ui/statements-views.md
@@ -41,11 +41,11 @@ The **Statements** tab is selected. The **Statement Fingerprints** radio button
The following screenshot shows the statement fingerprint for `SELECT city, id FROM vehicles WHERE city = $1` while running the [`movr` workload]({{ link_prefix}}cockroach-workload.html#run-the-movr-workload):
-
+
If you click the statement fingerprint in the **Statements** column, the [**Statement Fingerprint** page](#statement-fingerprint-page) displays.
-
+
## Active Executions view
@@ -66,11 +66,11 @@ When Auto [Refresh](#refresh) is On, active executions are polled every 10 secon
The following screenshot shows the active statement execution for `INSERT INTO users VALUES ($1, $2, $3, $4, $5)` while running the [`movr` workload]({{ link_prefix }}cockroach-workload.html#run-the-movr-workload):
-
+
If you click the execution ID in the **Statement Execution ID** column, the [**Statement Execution** details page](#statement-execution-details-page) displays.
-
+
{% if page.cloud != true %}
{% include {{ page.version.version }}/ui/refresh.md %}
diff --git a/src/current/_includes/v25.3/ui/transactions-filter.md b/src/current/_includes/v25.3/ui/transactions-filter.md
index 57330f05477..e9225d4e9e1 100644
--- a/src/current/_includes/v25.3/ui/transactions-filter.md
+++ b/src/current/_includes/v25.3/ui/transactions-filter.md
@@ -69,4 +69,4 @@ To filter the transactions:
The following screenshot shows the transactions that contain the string `rides` for the `movr` application filtered by `Runs Longer Than: 300 milliseconds`:
-
+
diff --git a/src/current/_includes/v25.3/ui/transactions-views.md b/src/current/_includes/v25.3/ui/transactions-views.md
index 37774b75fb6..96face0c911 100644
--- a/src/current/_includes/v25.3/ui/transactions-views.md
+++ b/src/current/_includes/v25.3/ui/transactions-views.md
@@ -41,11 +41,11 @@ Click the **Transactions** tab. The **Transaction Fingerprints** radio button is
The following screenshot shows the transaction fingerprint for `SELECT city, id FROM vehicles WHERE city = $1` while running the [`movr` workload]({{ link_prefix }}cockroach-workload.html#run-the-movr-workload):
-
+
If you click the transaction fingerprint in the **Transactions** column, the [**Transaction Details** page](#transaction-details-page) displays.
-
+
## Active Executions view
@@ -66,11 +66,11 @@ When Auto [Refresh](#refresh) is On, active executions are polled every 10 secon
The following screenshot shows the active statement execution for `UPSERT INTO vehicle_location_histories VALUES ($1, $2, now(), $4, $5)` while running the [`movr` workload]({{ link_prefix }}cockroach-workload.html#run-the-movr-workload):
-
+
If you click the execution ID in the **Transaction Execution ID** column, the [**Transaction Execution** details page](#transaction-execution-details-page) displays.
-
+
{% if page.cloud != true %}
{% include {{ page.version.version }}/ui/refresh.md %}
diff --git a/src/current/_includes/v25.3/ui/ui-summary-events.md b/src/current/_includes/v25.3/ui/ui-summary-events.md
index f074d5584d6..abe374c88ab 100644
--- a/src/current/_includes/v25.3/ui/ui-summary-events.md
+++ b/src/current/_includes/v25.3/ui/ui-summary-events.md
@@ -20,7 +20,7 @@ P99 Latency | The 99th percentile of service latency.
Underneath the [Summary](#summary-panel) panel, the **Events** panel lists the 5 most recent events logged for all nodes across the cluster. To list all events, click **View all events**.
-
+
The following types of events are listed:
diff --git a/src/current/_includes/v25.3/upgrade-requirements.md b/src/current/_includes/v25.3/upgrade-requirements.md
index 2d28309a2be..048f5f257fb 100644
--- a/src/current/_includes/v25.3/upgrade-requirements.md
+++ b/src/current/_includes/v25.3/upgrade-requirements.md
@@ -1,3 +1,3 @@
CockroachDB v25.1 is an Innovation release. To upgrade to it, you must be running v24.3, the previous Regular release.
-Before continuing, [upgrade to v24.3]({% link v24.3/upgrade-cockroach-version.md %}).
+Before continuing, [upgrade to v24.3]({% link {{ page.version.version }}/upgrade-cockroach-version.md %}).
diff --git a/src/current/v25.3/architecture/reads-and-writes-overview.md b/src/current/v25.3/architecture/reads-and-writes-overview.md
index e1c0d5ee0fa..27a1bbd1c97 100644
--- a/src/current/v25.3/architecture/reads-and-writes-overview.md
+++ b/src/current/v25.3/architecture/reads-and-writes-overview.md
@@ -30,7 +30,7 @@ First, imagine a simple read scenario where:
- Ranges are replicated 3 times (the default).
- A query is executed against node 2 to read from table 3.
-
+
In this case:
@@ -41,13 +41,13 @@ In this case:
If the query is received by the node that has the leaseholder for the relevant range, there are fewer network hops:
-
+
## Write scenario
Now imagine a simple write scenario where a query is executed against node 3 to write to table 1:
-
+
In this case:
@@ -60,7 +60,7 @@ In this case:
Just as in the read scenario, if the write request is received by the node that has the leaseholder and Raft leader for the relevant range, there are fewer network hops:
-
+
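One way to see where the leaseholder for a table's ranges currently lives, and therefore which node coordinates that table's reads and writes, is a query along these lines (a sketch, assuming a hypothetical table `t`; the `lease_holder` column appears in the detailed output):

~~~ sql
SHOW RANGES FROM TABLE t WITH DETAILS;
~~~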
## Network and I/O bottlenecks
diff --git a/src/current/v25.3/architecture/storage-layer.md b/src/current/v25.3/architecture/storage-layer.md
index 69c1253ea3f..8744e1dc478 100644
--- a/src/current/v25.3/architecture/storage-layer.md
+++ b/src/current/v25.3/architecture/storage-layer.md
@@ -63,7 +63,7 @@ Pebble uses a Log-structured Merge-tree (hereafter _LSM tree_ or _LSM_) to manag
SSTs are an on-disk representation of sorted lists of key-value pairs. Conceptually, they look something like this (intentionally simplified) diagram:
-
+
SST files are immutable; they are never modified, even during the [compaction process](#compaction).
@@ -78,7 +78,7 @@ The SSTs within each level are guaranteed to be non-overlapping: for example, if
- To allow LSM-based storage engines like Pebble to support ingesting large amounts of data, such as when using the [`IMPORT INTO`]({% link {{ page.version.version }}/import-into.md %}) statement.
- To allow for easier and more efficient flushes of [memtables](#memtable-and-write-ahead-log).
-
+
##### Write amplification
@@ -155,7 +155,7 @@ Another file on disk called the write-ahead log (hereafter _WAL_) is associated
The relationship between the memtable, the WAL, and the SST files is shown in the diagram below. New values are written to the WAL at the same time as they are written to the memtable. From the memtable they are eventually written to SST files on disk for longer-term storage.
-
+
##### LSM design tradeoffs
diff --git a/src/current/v25.3/aws-dms.md b/src/current/v25.3/aws-dms.md
index 99a42ac0532..df3317f81e3 100644
--- a/src/current/v25.3/aws-dms.md
+++ b/src/current/v25.3/aws-dms.md
@@ -93,7 +93,7 @@ As of publishing, AWS DMS supports migrations from these relational databases (f
1. In the AWS Console, open **AWS DMS**.
1. Open **Endpoints** in the sidebar. A list of endpoints will display, if any exist.
1. In the top-right portion of the window, select **Create endpoint**.
-
+
A configuration page will open.
1. In the **Endpoint type** section, select **Target endpoint**.
@@ -107,10 +107,10 @@ As of publishing, AWS DMS supports migrations from these relational databases (f
{{site.data.alerts.callout_info}}
To connect to a CockroachDB {{ site.data.products.standard }} or {{ site.data.products.basic }} cluster, set the **Database name** to `{host}.{database}`. For details on how to find these parameters, see [Connect to your cluster]({% link cockroachcloud/connect-to-your-cluster.md %}?filters=connection-parameters#connect-to-your-cluster). Also set **Secure Socket Layer (SSL) mode** to **require**.
{{site.data.alerts.end}}
-
+
1. If needed, you can test the connection under **Test endpoint connection (optional)**.
1. To create the endpoint, select **Create endpoint**.
-
+
## Step 2. Create a database migration task
@@ -124,7 +124,7 @@ To conserve CPU, consider migrating tables in multiple replication tasks, rather
1. While in **AWS DMS**, select **Database migration tasks** in the sidebar. A list of database migration tasks will display, if any exist.
1. In the top-right portion of the window, select **Create task**.
-
+
A configuration page will open.
1. Supply a **Task identifier** to identify the replication task.
@@ -135,17 +135,17 @@ To conserve CPU, consider migrating tables in multiple replication tasks, rather
{{site.data.alerts.callout_danger}}
If you choose **Migrate existing data and replicate ongoing changes** or **Replicate data changes only**, you must first [disable revision history for backups](#setup).
{{site.data.alerts.end}}
-
+
### Step 2.2. Task settings
1. For the **Editing mode** radio button, keep **Wizard** selected.
1. To preserve the schema you manually created, select **Truncate** or **Do nothing** for the **Target table preparation mode**.
-
+
1. Optionally check **Enable validation** to compare the data in the source and target rows, and verify that the migration succeeded. You can view the results in the [**Table statistics**](#step-3-verify-the-migration) for your migration task. For more information about data validation, see the [AWS documentation](https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Validating.html).
1. Check the **Enable CloudWatch logs** option. We highly recommend this for troubleshooting potential migration issues.
1. For the **Target Load**, select **Detailed debug**.
-
+
### Step 2.3. Table mappings
@@ -161,7 +161,7 @@ When specifying a range of tables to migrate, the following aspects of the sourc
1. Select **Add new selection rule**.
1. In the **Schema** dropdown, select **Enter a schema**.
1. Supply the appropriate **Source name** (schema name), **Table name**, and **Action**.
-
+
{{site.data.alerts.callout_info}}
Use `%` as an example of a wildcard for all schemas in a PostgreSQL database. However, in MySQL, using `%` as a schema name imports all the databases, including the metadata/system ones, as MySQL treats schemas and databases as the same.
@@ -205,7 +205,7 @@ If your migration succeeded, you should now:
If your migration failed for some reason, you can check the checkbox next to the table(s) you wish to re-migrate and select **Reload table data**.
-
+
## Optional configurations
@@ -221,7 +221,7 @@ The `BatchApplyEnabled` setting can improve replication performance and is recom
1. Choose your task, and then choose **Modify**.
1. From the **Task settings** section, switch the **Editing mode** from **Wizard** to **JSON editor**. Locate the `BatchApplyEnabled` setting and change its value to `true`. Information about the `BatchApplyEnabled` setting can be found [here](https://aws.amazon.com/premiumsupport/knowledge-center/dms-batch-apply-cdc-replication/).
-
+
{{site.data.alerts.callout_info}}
`BatchApplyEnabled` does not work when using **Drop tables on target** as a target table preparation mode. Thus, all schema-related changes must be manually copied over if using `BatchApplyEnabled`.
@@ -277,7 +277,7 @@ The `BatchApplyEnabled` setting can improve replication performance and is recom
To prevent this error, set the [`expect_and_ignore_not_visible_columns_in_copy` session variable]({% link {{ page.version.version }}/session-variables.md %}#expect-and-ignore-not-visible-columns-in-copy) in the DMS [target endpoint configuration](#step-1-create-a-target-endpoint-pointing-to-cockroachdb). Under **Endpoint settings**, add an **AfterConnectScript** setting with the value `SET expect_and_ignore_not_visible_columns_in_copy=on`.
-
+
- The following error in the CockroachDB [logs]({% link {{ page.version.version }}/logging-overview.md %}) indicates that AWS DMS is unable to copy into a table with a [computed column]({% link {{ page.version.version }}/computed-columns.md %}):
diff --git a/src/current/v25.3/backup-architecture.md b/src/current/v25.3/backup-architecture.md
index 5a653eb6470..b0fabd82c3f 100644
--- a/src/current/v25.3/backup-architecture.md
+++ b/src/current/v25.3/backup-architecture.md
@@ -27,7 +27,7 @@ At a high level, CockroachDB performs the following tasks when running a backup
The following diagram illustrates the flow from `BACKUP` statement through to a complete backup in cloud storage:
-
+
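For example, a statement like the following starts the flow shown in the diagram (a minimal sketch; the bucket name and `AUTH` parameter are placeholders):

~~~ sql
BACKUP DATABASE movr INTO 's3://backup-bucket/movr?AUTH=implicit' AS OF SYSTEM TIME '-10s';
~~~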
## Job creation phase
@@ -85,7 +85,7 @@ In the following diagram, nodes that contain replicas of the relevant key spans
While processing, the nodes emit progress data that tracks their backup work to the coordinator. In the diagram, **Node 3** and **Node 1** will send progress data to **Node 2**. The coordinator node will then aggregate the progress data into checkpoint files in the storage bucket. The checkpoint files provide a marker for the backup to resume after a retryable state, such as when it has been [paused]({% link {{ page.version.version }}/pause-job.md %}).
-
+
## Metadata writing phase
@@ -112,7 +112,7 @@ For example, in the following diagram there is a three-node cluster split across
During a [restore]({% link {{ page.version.version }}/restore.md %}) job, the job creation statement will need access to each of the storage locations to read the metadata files in order to complete a successful restore.
-
+
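As a sketch of this locality-aware case, the restore statement is pointed at every storage location the backup wrote to; the bucket names and locality values below are placeholders:

~~~ sql
RESTORE DATABASE movr FROM LATEST IN (
  's3://backup-default?AUTH=implicit&COCKROACH_LOCALITY=default',
  's3://backup-us-west?AUTH=implicit&COCKROACH_LOCALITY=region%3Dus-west-1'
);
~~~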
#### Job coordination on CockroachDB Standard and Basic clusters
diff --git a/src/current/v25.3/changefeed-messages.md b/src/current/v25.3/changefeed-messages.md
index 758d8b6c873..2170fc5bbc7 100644
--- a/src/current/v25.3/changefeed-messages.md
+++ b/src/current/v25.3/changefeed-messages.md
@@ -237,7 +237,7 @@ A changefeed job cannot confirm that a message has been received by the sink unl
When a changefeed must pause and then resume, it will return to the last checkpoint (**A**), which is the last point at which the coordinator confirmed all changes for the given timestamp. As a result, when the changefeed resumes, it will re-emit the messages that were not confirmed in the next checkpoint. The changefeed may not re-emit every message, but it will ensure each change is emitted at least once.
-
+
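For instance, the pause-and-resume cycle described above can be driven manually; the job ID below is a placeholder, and `SHOW CHANGEFEED JOBS` reports each changefeed's high-water timestamp (its last checkpoint):

~~~ sql
SHOW CHANGEFEED JOBS;
PAUSE JOB 885177002634002433;   -- placeholder job ID
RESUME JOB 885177002634002433;  -- emission resumes from the last checkpoint
~~~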
### Changefeed encounters an error
diff --git a/src/current/v25.3/changefeed-monitoring-guide.md b/src/current/v25.3/changefeed-monitoring-guide.md
index 28d0cec6af5..defa5f110f3 100644
--- a/src/current/v25.3/changefeed-monitoring-guide.md
+++ b/src/current/v25.3/changefeed-monitoring-guide.md
@@ -22,7 +22,7 @@ The changefeed pipeline contains three main sections that start at the [storage
- [**Processing**](#processing-aggregation-and-encoding): Prepares the change events from the rangefeed into [changefeed messages]({% link {{ page.version.version }}/changefeed-messages.md %}) by encoding messages into the specified [format]({% link {{ page.version.version }}/changefeed-messages.md %}#message-formats).
- [**Sink**](#sink): Delivers changefeed messages to the [downstream sink]({% link {{ page.version.version }}/changefeed-sinks.md %}).
-
+
Where noted in the following sections, you can use changefeed [metrics labels]({% link {{ page.version.version }}/monitor-and-debug-changefeeds.md %}#using-changefeed-metrics-labels) to measure metrics per changefeed.
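For example, a label can be attached when the changefeed is created (a sketch; the table, sink URI, and label are placeholders):

~~~ sql
CREATE CHANGEFEED FOR TABLE rides
  INTO 'kafka://localhost:9092'
  WITH metrics_label = 'rides_feed';
~~~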
diff --git a/src/current/v25.3/cloud-storage-authentication.md b/src/current/v25.3/cloud-storage-authentication.md
index 4a7ec43d91d..bdcd2b44969 100644
--- a/src/current/v25.3/cloud-storage-authentication.md
+++ b/src/current/v25.3/cloud-storage-authentication.md
@@ -131,7 +131,7 @@ For example, to configure a user to assume an IAM role that allows a bulk operat
The `sts:AssumeRole` permission allows the user to obtain a temporary set of security credentials that gives them access to an S3 bucket to which they would not have access with their user-based permissions.
-
+
1. Return to your IAM role's **Summary** page, and click on the **Trust Relationships** tab. Add a trust policy into the role, which will define the users that can assume the role.
@@ -304,7 +304,7 @@ Once you have an identity role that your CockroachDB nodes can assume, you can c
Copy the ARN of the identity role. In the Amazon management console, click on **IAM**, then **Roles**, and select the name of your identity role. From the **Summary** page, copy your ARN. You will need this when configuring the Trust Policy for the IAM role to be assumed.
-
+
See [Step 2. Trust the identity role](#step-2-trust-the-identity-role) to add this ARN to an operation role's Trust Policy.
@@ -318,21 +318,21 @@ If you already have the role that contains permissions for the operation, ensure
1. To create an operation role, click **Create Role** under the **Roles** menu. Select **Custom trust policy** and then add the ARN of your identity role (from [Step 1](#step-1-set-up-the-identity-role)) to the JSON by clicking `Principal`. This will open a dialog box. Select **IAM Roles** for **Principal Type** and paste the ARN. Click **Add Principal** and then **Next**.
-
+
2. On the **Add Permissions** page, search for the permission policies that the role will need to complete the bulk operation.
-
+
Or, use the **Create Policy** button to define the required permissions. You can use the visual editor to select the service, actions, and resources.
-
+
Or, use the JSON tab to specify the policy. For the JSON editor, see [Storage Permissions]({% link {{ page.version.version }}/use-cloud-storage.md %}#storage-permissions) for an example and detail on the minimum permissions required for each operation to complete. Click **Next**.
3. Finally, give the role a name on the **Name, review, and create** page. The following screenshot shows the selected trust policy and permissions:
-
+
### Step 3. Run the operation by assuming the role
@@ -447,7 +447,7 @@ For this example, both service accounts have already been created. If you need t
- In [Google's Cloud console](https://console.cloud.google.com/getting-started), click **IAM & Admin**, **Roles**, and then **Create Role**.
- Add a title for the role and then click **Add Permissions**. Filter for the permissions required for the bulk operation. For example, if you want to enable service account B to run a changefeed, your role will include the `storage.objects.create` permission. See the [Storage permissions]({% link {{ page.version.version }}/use-cloud-storage.md %}#storage-permissions) section on this page for details on the minimum permissions each CockroachDB bulk operation requires.
-
+
{{site.data.alerts.callout_success}}
Alternately, you can use the [gcloud CLI](https://cloud.google.com/sdk/gcloud/reference/iam/roles/create) to create roles.
@@ -457,14 +457,14 @@ For this example, both service accounts have already been created. If you need t
- Go to the **Cloud Storage** menu and select the bucket. In the bucket's menu, click **Grant Access**.
- Add the service account to the **Add principals** box and select the name of the role you created in step 1 under **Assign roles**.
-
+
1. Next, service account B needs the "Service Account Token Creator" role for service account A. This enables service account B to create short-lived tokens for A.
- Go to the **Service Accounts** menu in the Google Cloud Console.
- Select service account B from the list, then the **Permissions** tab, and click **Grant Access** under **Principals with access to this service account**.
- Enter the name of service account A into the **New principals** box and select "Service Account Token Creator" under the **Assign roles** dropdown. Click **Save** to complete.
-
+
1. Finally, you will run the bulk operation from your CockroachDB cluster. If you're using [specified authentication](#google-cloud-storage-specified), pass in the GCS bucket's URL with the IAM user's `CREDENTIALS`. If you're using [implicit authentication](#google-cloud-storage-implicit), specify `AUTH=IMPLICIT` instead. For assuming the role, pass the assumed role's service account name, which you can copy from the **Service Accounts** page:
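A sketch of such a statement, where the bucket, project, and assumed service account (service account A) names are placeholders:

~~~ sql
BACKUP DATABASE movr INTO
  'gs://backup-bucket/movr?AUTH=implicit&ASSUME_ROLE=service-account-a@my-project.iam.gserviceaccount.com';
~~~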
@@ -600,13 +600,13 @@ See [Step 2](#step-2-create-the-operation-service-account) to create an operatio
b. In the **Grant this service account access to project** section, select the role you require for the bulk operation, e.g., "Storage Object Creator". See [Storage Permissions]({% link {{ page.version.version }}/use-cloud-storage.md %}#storage-permissions) for detail on the minimum permissions required for each operation to complete. Click **Continue**.
-
+
c. In the **Grant users access to this service account** section, paste the name of the identity service account. Then, click **Done**.
-
+
### Step 3. Give the identity service account the token creator role
@@ -616,7 +616,7 @@ Next, the operation service account needs to contain the "Service Account Token
1. Select the operation service account from the list, then the **Permissions** tab, and click **Grant Access** under **Principals with access to this service account**.
1. Enter the name of the identity service account into the **New principals** box and select "Service Account Token Creator" under the **Assign roles** dropdown. Click **Save** to complete.
-
+
### Step 4. Run the operation by assuming the service account
diff --git a/src/current/v25.3/configure-replication-zones.md b/src/current/v25.3/configure-replication-zones.md
index 02752f39314..090ea83714b 100644
--- a/src/current/v25.3/configure-replication-zones.md
+++ b/src/current/v25.3/configure-replication-zones.md
@@ -129,7 +129,7 @@ The following diagram presents the same set of schema objects as the previous ou
Each box represents a schema object in the zone configuration inheritance hierarchy. Each solid line ends in an arrow that points from a parent object to its child object, which will inherit the parent's values unless those values are changed at the child level. The dotted lines between partitions and sub-partitions represent the known limitation mentioned previously that sub-partitions do not inherit their values from their parent partitions. Instead, sub-partitions inherit their values from the parent table. For more information about this limitation, see [cockroachdb/cockroach#75862](https://github.com/cockroachdb/cockroach/issues/75862).
-
+
#### Zone config inheritance - example SQL session
diff --git a/src/current/v25.3/data-resilience.md b/src/current/v25.3/data-resilience.md
index 73306f0289f..aaf0b8ecd8d 100644
--- a/src/current/v25.3/data-resilience.md
+++ b/src/current/v25.3/data-resilience.md
@@ -9,7 +9,7 @@ CockroachDB provides built-in [**high availability (HA)**](#high-availability) f
- HA features ensure continuous access to data without interruption even in the presence of [failures]({% link {{ page.version.version }}/demo-cockroachdb-resilience.md %}) or disruptions to maximize uptime.
- DR tools allow for recovery from major incidents to minimize downtime and data loss.
-
+
You can balance required SLAs and recovery objectives with the cost and management of each of these features to build a resilient deployment.
diff --git a/src/current/v25.3/datadog.md b/src/current/v25.3/datadog.md
index e57514ae471..529c925956e 100644
--- a/src/current/v25.3/datadog.md
+++ b/src/current/v25.3/datadog.md
@@ -30,7 +30,7 @@ Before you can follow the steps presented in this tutorial, you must have:
To enable the CockroachDB check for your installed Datadog Agent, navigate to the [Integrations page](https://app.datadoghq.com/account/settings#integrations) and find CockroachDB in the list of available integrations. Hover over the icon and click **+ Install**.
-
+
Note that you must restart the Datadog Agent for the change to take effect. CockroachDB will then be listed as an installed integration.
@@ -122,11 +122,11 @@ cockroachdb (1.6.0)
Open your Datadog [Dashboard List](https://app.datadoghq.com/dashboard/lists) and click on `CockroachDB Overview`:
-
+
This sample dashboard presents metrics on cluster availability, query performance, and resource usage:
-
+
{{site.data.alerts.callout_info}}
If you wish to customize your CockroachDB dashboard, it's recommended that you clone the default `CockroachDB Overview` dashboard before adding and removing widgets. If you leave the default dashboard intact, Datadog will update it when new versions of the integration's dashboard are released.
@@ -152,7 +152,7 @@ cockroach workload run movr --duration=5m 'postgresql://root@localhost:26257?ssl
The query metrics will appear on the dashboard:
-
+
## Step 6. Add monitoring and alerting
@@ -162,14 +162,14 @@ Select **Threshold Alert** as the detection method. You can use this option to c
The example alert below will trigger when [a node has less than 15% of storage capacity remaining]({% link {{ page.version.version }}/monitoring-and-alerting.md %}#node-is-running-low-on-disk-space):
-
+
- `cockroachdb.capacity.available` is divided by `cockroachdb.capacity.total` to determine the fraction of available capacity on the node's [store]({% link {{ page.version.version }}/architecture/storage-layer.md %}) (the directory on each node where CockroachDB reads and writes its data).
- The alert threshold is set to `0.15`.
The timeseries graph at the top of the page indicates the configured metric and threshold:
-
+
## Known limitations
diff --git a/src/current/v25.3/dbeaver.md b/src/current/v25.3/dbeaver.md
index ee645c9ef67..accad7a3811 100644
--- a/src/current/v25.3/dbeaver.md
+++ b/src/current/v25.3/dbeaver.md
@@ -32,19 +32,19 @@ To work through this tutorial, take the following steps:
1. Start DBeaver, and select **Database > New Connection** from the menu. In the dialog that appears, select **CockroachDB** from the list.
-
+
1. Click **Next**. The **Connection Settings** dialog displays.
1. In the **Database** field, enter `movr`.
-
+
## Step 2. Update the connection settings
1. Click the **SSL** tab.
-
+
1. Check the **Use SSL** checkbox as shown, and fill in the text areas as follows:
- **Root certificate**: Use the `ca.crt` file you generated for your secure cluster.
@@ -61,13 +61,13 @@ To work through this tutorial, take the following steps:
1. Click **Test Connection ...**. If you need a driver, the following dialog displays:
-
+
1. Click **Download**.
After the driver downloads, if the connection was successful, you will see a **Connected** dialog.
-
+
1. Click **OK** to dismiss the dialog.
@@ -75,7 +75,7 @@ To work through this tutorial, take the following steps:
Expand the **movr** database node and navigate to the **rides** table.
-
+
For more information about using DBeaver, see the [DBeaver documentation](https://dbeaver.io/docs/).
diff --git a/src/current/v25.3/dbmarlin.md b/src/current/v25.3/dbmarlin.md
index 4f754a8966c..857dcd188c2 100644
--- a/src/current/v25.3/dbmarlin.md
+++ b/src/current/v25.3/dbmarlin.md
@@ -50,7 +50,7 @@ Follow the steps in [Instance Dashboard](https://docs.dbmarlin.com/docs/Using-DB
When you open the dashboard, you'll see:
-
+
## See also
diff --git a/src/current/v25.3/demo-automatic-cloud-migration.md b/src/current/v25.3/demo-automatic-cloud-migration.md
index c3003bd87e8..78950787f84 100644
--- a/src/current/v25.3/demo-automatic-cloud-migration.md
+++ b/src/current/v25.3/demo-automatic-cloud-migration.md
@@ -171,7 +171,7 @@ Now open the DB Console at `http://localhost:8080` and click **Metrics** in the
Scroll down a bit and hover over the **Replicas per Node** graph. Because CockroachDB replicates each piece of data 3 times by default, the replica count on each of your 3 nodes should be identical:
-
+
## Step 7. Add 3 nodes on "cloud 2"
@@ -223,7 +223,7 @@ $ cockroach start \
Back on the **Overview** dashboard in DB Console, hover over the **Replicas per Node** graph again. Because you used [`--locality`]({% link {{ page.version.version }}/configure-replication-zones.md %}#descriptive-attributes-assigned-to-nodes) to specify that nodes are running on 2 clouds, you'll see an approximately even number of replicas on each node, indicating that CockroachDB has automatically rebalanced replicas across both simulated clouds:
-
+
Note that it takes a few minutes for the DB Console to show accurate per-node replica counts on hover. This is why the new nodes in the screenshot above show 0 replicas. However, the graph lines are accurate, and you can click **View node list** in the **Summary** area for accurate per-node replica counts as well.
@@ -242,7 +242,7 @@ $ cockroach sql --execute="ALTER RANGE default CONFIGURE ZONE USING constraints=
Back on the **Overview** dashboard in the DB Console, hover over the **Replicas per Node** graph again. Very soon, you'll see the replica count double on nodes 4, 5, and 6 and drop to 0 on nodes 1, 2, and 3:
-
+
This indicates that all data has been migrated from cloud 1 to cloud 2. In a real cloud migration scenario, at this point you would update the load balancer to point to the nodes on cloud 2 and then stop the nodes on cloud 1. But for the purpose of this local simulation, there's no need to do that.
diff --git a/src/current/v25.3/demo-cockroachdb-resilience.md b/src/current/v25.3/demo-cockroachdb-resilience.md
index d07d3f9df57..570398d6bf7 100644
--- a/src/current/v25.3/demo-cockroachdb-resilience.md
+++ b/src/current/v25.3/demo-cockroachdb-resilience.md
@@ -187,25 +187,25 @@ Initially, the workload creates a new database called `ycsb`, creates the table
1. To check the SQL queries getting executed, click **Metrics** on the left, and hover over the **SQL Statements** graph at the top:
-
+
1. To check the client connections from the load generator, select the **SQL** dashboard and hover over the **Open SQL Sessions** graph:
-
+
You'll notice 3 client connections from the load generator. If you want to check that HAProxy balanced each connection to a different node, you can change the **Graph** dropdown from **Cluster** to specific nodes.
1. To see more details about the `ycsb` database and the `public.usertable` table, click **Databases** in the upper left and click **ycsb**:
-
+
You can also view the schema and other table details of `public.usertable` by clicking the table name:
-
+
1. By default, CockroachDB replicates all data 3 times and balances it across all nodes. To see this balance, click **Overview** and check the replica count across all nodes:
-
+
## Step 5. Simulate a single node failure
@@ -246,7 +246,7 @@ When a node fails, the cluster waits for the node to remain offline for 5 minute
Go back to the DB Console, click **Metrics** on the left, and verify that the cluster as a whole continues serving data, despite one of the nodes being unavailable and marked as **Suspect**:
-
+
This shows that when all ranges are replicated 3 times (the default), the cluster can tolerate a single node failure because the surviving nodes have a majority of each range's replicas (2/3).
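You can confirm the replication factor behind this behavior by inspecting the default zone configuration from any SQL shell connected to the cluster. A minimal sketch:

~~~ sql
-- Sketch: display the default replication settings, including num_replicas (3 by default).
SHOW ZONE CONFIGURATION FROM RANGE default;
~~~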
@@ -254,7 +254,7 @@ This shows that when all ranges are replicated 3 times (the default), the cluste
Click **Overview** on the left:
-
+
Because you reduced the time it takes for the cluster to consider the down node dead, after 1 minute or so, the cluster will consider the down node "dead", and you'll see the replica count on the remaining nodes increase and the number of under-replicated ranges decrease to 0. This shows the cluster repairing itself by re-replicating missing replicas.
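The interval after which an offline node is considered dead is controlled by the `server.time_until_store_dead` cluster setting (5 minutes by default). A hedged sketch of lowering it, with an illustrative value:

~~~ sql
-- Sketch: consider an unresponsive node dead after 1 minute 15 seconds instead of 5 minutes.
SET CLUSTER SETTING server.time_until_store_dead = '1m15s';
~~~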
@@ -286,7 +286,7 @@ To be able to tolerate 2 of 5 nodes failing simultaneously without any service i
1. In the DB Console **Overview** dashboard, watch the replica count increase and even out across all 6 nodes:
-
+
This shows the cluster up-replicating so that each range has 5 replicas, one on each node.
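Tolerating 2 simultaneous node failures requires 5 replicas per range, so that a majority (3 of 5) survives. A minimal sketch of raising the default replication factor:

~~~ sql
-- Sketch: replicate every range in the default zone 5 times instead of 3.
ALTER RANGE default CONFIGURE ZONE USING num_replicas = 5;
~~~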
@@ -303,7 +303,7 @@ kill -TERM {process IDs}
1. Click **Overview** on the left, and verify the state of the cluster:
-
+
1. To verify that the cluster still serves data, use the `cockroach sql` command to interact with the cluster.
diff --git a/src/current/v25.3/demo-low-latency-multi-region-deployment.md b/src/current/v25.3/demo-low-latency-multi-region-deployment.md
index 2025a5d8c49..62ae55d060d 100644
--- a/src/current/v25.3/demo-low-latency-multi-region-deployment.md
+++ b/src/current/v25.3/demo-low-latency-multi-region-deployment.md
@@ -121,7 +121,7 @@ node 9:
And here is the view on the **Network Latency Page**, which shows which nodes are in which cluster regions:
-
+
You can see by referring back and forth between `\demo ls` and the **Network Latency Page** that the cluster has the following region/node/port correspondences, which we can use to determine how to connect MovR from various regions:
@@ -251,13 +251,13 @@ Now that you have load hitting the cluster from different regions, check how the
In the [DB Console]({% link {{ page.version.version }}/ui-overview.md %}) at http://127.0.0.1:8080, click [**Metrics**]({% link {{ page.version.version }}/ui-overview-dashboard.md %}) on the left and hover over the [**Service Latency: SQL, 99th percentile**]({% link {{ page.version.version }}/ui-overview-dashboard.md %}#service-latency-sql-99th-percentile) timeseries graph. You should see the effects of network latency on this workload.
-
+
For each of the 3 nodes that you are pointing the movr workload at, the max latency of 99% of queries is in the 1-2 second range. The SQL latency is high because of the network latency between regions.
To see the network latency between any two nodes in the cluster, click [**Network Latency**]({% link {{ page.version.version }}/ui-network-latency-page.md %}) in the left-hand navigation.
-
+
Within a single region, round-trip latency is under 6 ms (milliseconds). Across regions, round-trip latency is significantly higher.
@@ -314,7 +314,7 @@ As the multi-region schema changes complete, you should see changes to the follo
The small demo cluster used in this example is essentially in a state of overload from the start. The performance numbers shown here only reflect the direction of the performance improvements. You should expect to see much better absolute performance numbers than those described here [in a production deployment]({% link {{ page.version.version }}/recommended-production-settings.md %}).
{{site.data.alerts.end}}
-
+
## See also
diff --git a/src/current/v25.3/demo-serializable.md b/src/current/v25.3/demo-serializable.md
index f867f225b6a..246bb904952 100644
--- a/src/current/v25.3/demo-serializable.md
+++ b/src/current/v25.3/demo-serializable.md
@@ -33,7 +33,7 @@ When write skew happens, a transaction reads something, makes a decision based o
### Schema
-
+
## Step 1. Set up the scenario on PostgreSQL
diff --git a/src/current/v25.3/deploy-cockroachdb-on-aws.md b/src/current/v25.3/deploy-cockroachdb-on-aws.md
index 58f4e2fe26b..c9839932810 100644
--- a/src/current/v25.3/deploy-cockroachdb-on-aws.md
+++ b/src/current/v25.3/deploy-cockroachdb-on-aws.md
@@ -56,7 +56,7 @@ CockroachDB is supported in all [AWS regions](https://docs.aws.amazon.com/AWSEC2
In this basic deployment, 3 CockroachDB nodes are each deployed on an Amazon EC2 instance across 3 availability zones. These are grouped within a single VPC and security group. Users are routed to the cluster via [Amazon Route 53](https://aws.amazon.com/route53/) (which is not used in this tutorial) and a load balancer.
-
+
## Step 1. Create instances
diff --git a/src/current/v25.3/deploy-cockroachdb-with-kubernetes-openshift.md b/src/current/v25.3/deploy-cockroachdb-with-kubernetes-openshift.md
index 0ea67eb1c8d..89092692536 100644
--- a/src/current/v25.3/deploy-cockroachdb-with-kubernetes-openshift.md
+++ b/src/current/v25.3/deploy-cockroachdb-with-kubernetes-openshift.md
@@ -67,7 +67,7 @@ This article assumes you have already installed the OpenShift Container Platform
1. Enter "cockroach" in the search box. There are two tiles called **CockroachDB Operator**. Find the tile _without_ the `Marketplace` label (which requires a subscription).
-
+
Click the **CockroachDB Operator** tile and then **Install**.
@@ -93,7 +93,7 @@ This article assumes you have already installed the OpenShift Container Platform
1. In the **CockroachDB Operator** tile, click **Create instance**.
-
+
1. Make sure **CockroachDB Version** is set to a valid CockroachDB version. For a list of compatible image names, see `spec.containers.env` in the [public operator manifest](https://raw.github.com/cockroachdb/cockroach-operator/v{{ latest_operator_version }}/install/operator.yaml) on GitHub.
@@ -101,7 +101,7 @@ This article assumes you have already installed the OpenShift Container Platform
1. Navigate to **Workloads** > **Pods** and observe the pods being created:
-
+
1. You can also use the command line to view the pods:
@@ -317,7 +317,7 @@ To run a sample [CockroachDB workload]({% link {{ page.version.version }}/cockro
1. Select one of the CockroachDB pods on the **Pods** page and click **Logs**. This will reveal the JDBC URL that your application can use to connect to CockroachDB:
-
+
## Step 8. Delete the cluster
@@ -327,7 +327,7 @@ If you want to continue using this cluster, see the documentation on [configurin
1. Go to the **Installed Operators** page and find the cluster name of the CockroachDB cluster. Select **Delete CrdbCluster** from the menu.
-
+
This will delete the CockroachDB cluster being run by the operator. It will *not* delete:
diff --git a/src/current/v25.3/detect-hotspots.md b/src/current/v25.3/detect-hotspots.md
index 361d1ce6a45..0b18a5ccb2b 100644
--- a/src/current/v25.3/detect-hotspots.md
+++ b/src/current/v25.3/detect-hotspots.md
@@ -15,7 +15,7 @@ This page provides practical guidance on identifying common [hotspots]({% link {
The following sections provide detailed instructions for identifying potential hotspots and applying mitigations.
-
+
## Step 1. Check for a node outlier in metrics
@@ -27,7 +27,7 @@ To identify a [hotspot]({% link {{ page.version.version }}/understand-hotspots.m
1. Create a custom chart to monitor the `kv.concurrency.latch_conflict_wait_durations-avg` metric, which tracks time spent on [latch acquisition]({% link {{ page.version.version }}/architecture/transaction-layer.md %}#latch-manager) waiting for conflicts with other latches. For example, a [sequence]({% link {{ page.version.version }}/understand-hotspots.md %}#hot-sequence) that writes to the same row must wait to acquire the latch.
1. To display the metric per node, select the `PER NODE/STORE` checkbox as shown:
-
+
1. Is there a node with a maximum value that is a clear outlier in the cluster for the latch conflict wait durations metric?
    - If **Yes**, note the ID of the [hot node]({% link {{ page.version.version }}/understand-hotspots.md %}#hot-node) and the time period when it was hot. Proceed to check for a [`popular key detected` log](#a-popular-key-detected).
@@ -39,7 +39,7 @@ To identify a [hotspot]({% link {{ page.version.version }}/understand-hotspots.m
1. Monitor the [**CPU Percent** graph]({% link {{ page.version.version }}/ui-hardware-dashboard.md %}#cpu-percent). CPU usage typically increases with traffic volume.
1. Check if the CPU usage of the hottest node is 20% or more above the cluster average. For example, node `n5`, represented by the green line in the following **CPU Percent** graph, hovers at around 87% at time 17:35 compared to other nodes that hover around 20% to 25%.
-
+
1. Is there a node with a maximum value that is a clear outlier in the cluster for the CPU percent metric?
    - If **Yes**, note the ID of the [hot node]({% link {{ page.version.version }}/understand-hotspots.md %}#hot-node) and the time period when it was hot. Proceed to check for a [`popular key detected` log](#a-popular-key-detected).
diff --git a/src/current/v25.3/disaster-recovery-overview.md b/src/current/v25.3/disaster-recovery-overview.md
index e919d0dd4b6..abe5e12903a 100644
--- a/src/current/v25.3/disaster-recovery-overview.md
+++ b/src/current/v25.3/disaster-recovery-overview.md
@@ -12,7 +12,7 @@ As you evaluate CockroachDB's disaster recovery features, consider your organiza
When you use backups, RPO and RTO can be visualized as follows:
-
+
{{site.data.alerts.callout_info}}
For an overview of resiliency features in CockroachDB, refer to [Data Resilience]({% link {{ page.version.version }}/data-resilience.md %}).
diff --git a/src/current/v25.3/enable-node-map.md b/src/current/v25.3/enable-node-map.md
index 8d8dea4b7b9..d9028d36b53 100644
--- a/src/current/v25.3/enable-node-map.md
+++ b/src/current/v25.3/enable-node-map.md
@@ -15,7 +15,7 @@ This page guides you through the process of setting up and enabling the Node Map
-
+
## Set up and enable the Node Map
@@ -99,17 +99,17 @@ To start a new cluster with the correct `--locality` flags:
1. [Access the DB Console]({% link {{ page.version.version }}/ui-overview.md %}#db-console-access).
-1. If the node list displays, click the selector and select **Node Map**.
+1. If the node list displays, click the selector and select **Node Map**.
The following page is displayed:
-
+
### Step 2. Set the Enterprise license and refresh the DB Console
The Node Map should now be displaying the highest-level localities you defined:
-
+
{{site.data.alerts.callout_info}}
To be displayed on the world map, localities must be assigned a corresponding latitude and longitude.
@@ -140,7 +140,7 @@ For the latitudes and longitudes of AWS, Azure, and Google Cloud regions, see [L
Refresh the DB Console to see the updated Node Map:
-
+
### Step 5. Navigate the Node Map
@@ -148,11 +148,11 @@ To navigate to Node 2, which is in datacenter `us-east-1a` in the `us-east-1` re
1. Click the map component marked as **region=us-east-1** on the Node Map. The [locality component]({% link {{ page.version.version }}/ui-cluster-overview-page.md %}#locality-component) for the datacenter is displayed.
-
+
1. Click the datacenter component marked as **datacenter=us-east-1a**. The individual [node components]({% link {{ page.version.version }}/ui-cluster-overview-page.md %}#node-component) are displayed.
-
+
1. To navigate back to the cluster view, either click **Cluster** in the breadcrumb trail at the top of the Node Map, or click **Up to REGION=US-EAST-1** and then click **Up to CLUSTER** in the lower left-hand side of the Node Map.
@@ -165,7 +165,7 @@ To navigate to Node 2, which is in datacenter `us-east-1a` in the `us-east-1` re
To verify all requirements, navigate to the [**Localities**]({% link {{ page.version.version }}/ui-debug-pages.md %}#configuration) debug page in the DB Console.
-
+
The **Localities** debug page displays the following:
diff --git a/src/current/v25.3/explain.md b/src/current/v25.3/explain.md
index 4d7d62e83a1..1d62a9312a3 100644
--- a/src/current/v25.3/explain.md
+++ b/src/current/v25.3/explain.md
@@ -886,7 +886,7 @@ The output of `EXPLAIN (DISTSQL)` is a URL for a graphical diagram that displays
To view the [DistSQL plan diagram]({% link {{ page.version.version }}/explain-analyze.md %}#distsql-plan-diagram), open the URL. You should see the following:
-
+
To include the data types of the input columns in the physical plan, use `EXPLAIN(DISTSQL, TYPES)`:
@@ -903,7 +903,7 @@ EXPLAIN (DISTSQL, TYPES) SELECT l_shipmode, AVG(l_extendedprice) FROM lineitem G
Open the URL. You should see the following:
-
+
### Find the indexes and key ranges a query uses
diff --git a/src/current/v25.3/geojson.md b/src/current/v25.3/geojson.md
index 859223f7ad1..6ee553e9f0c 100644
--- a/src/current/v25.3/geojson.md
+++ b/src/current/v25.3/geojson.md
@@ -189,7 +189,7 @@ The JSON below is modified from the output above: it is grouped into a GeoJSON `
Here is the geometry described above as shown on [geojson.io](http://geojson.io):
-
+
## See also
diff --git a/src/current/v25.3/geoserver.md b/src/current/v25.3/geoserver.md
index 8753fd6be84..27371d9d2c6 100644
--- a/src/current/v25.3/geoserver.md
+++ b/src/current/v25.3/geoserver.md
@@ -174,7 +174,7 @@ In the row for the `roads` layer, click the **OpenLayers** button under the **Co
Your browser should open a new tab with the title **OpenLayers map preview**. It should show a map view that looks like the following:
-
+
## See also
diff --git a/src/current/v25.3/how-does-a-changefeed-work.md b/src/current/v25.3/how-does-a-changefeed-work.md
index b58be3b1303..092fdd37972 100644
--- a/src/current/v25.3/how-does-a-changefeed-work.md
+++ b/src/current/v25.3/how-does-a-changefeed-work.md
@@ -9,7 +9,7 @@ When a changefeed that will emit changes to a sink is started on a node, that no
Each node uses its _aggregator processors_ to send back checkpoint progress to the coordinator, which gathers this information to update the _high-water mark timestamp_. The high-water mark acts as a checkpoint for the changefeed’s job progress, and guarantees that all changes before (or at) the timestamp have been emitted. In the unlikely event that the changefeed’s coordinating node were to fail during the job, that role will move to a different node and the changefeed will restart from the last checkpoint. If restarted, the changefeed may [re-emit messages]({% link {{ page.version.version }}/changefeed-messages.md %}#duplicate-messages) starting at the high-water mark time to the current time. Refer to [Ordering Guarantees]({% link {{ page.version.version }}/changefeed-messages.md %}#ordering-and-delivery-guarantees) for detail on CockroachDB's at-least-once-delivery-guarantee and how per-key message ordering is applied.
-
+
With [`resolved`]({% link {{ page.version.version }}/create-changefeed.md %}#resolved) specified when a changefeed is started, the coordinator will send the resolved timestamp (i.e., the high-water mark) to each endpoint in the sink. For example, when using [Kafka]({% link {{ page.version.version }}/changefeed-sinks.md %}#kafka) this will be sent as a message to each partition; for [cloud storage]({% link {{ page.version.version }}/changefeed-sinks.md %}#cloud-storage-sink), this will be emitted as a resolved timestamp file.
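For example, a changefeed to Kafka that emits resolved timestamps roughly every 10 seconds might be created as follows; the table name, sink URI, and interval here are placeholders:

~~~ sql
-- Sketch: emit a resolved timestamp message to every Kafka partition about every 10 seconds.
CREATE CHANGEFEED FOR TABLE movr.rides
  INTO 'kafka://localhost:9092'
  WITH resolved = '10s';
~~~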
diff --git a/src/current/v25.3/kibana.md b/src/current/v25.3/kibana.md
index c20560244ee..75e2be2c3de 100644
--- a/src/current/v25.3/kibana.md
+++ b/src/current/v25.3/kibana.md
@@ -83,11 +83,11 @@ Open the Kibana web interface and click **Dashboard**.
Search for the CockroachDB dashboard:
-
+
Click the dashboard title to open the dashboard, which presents metrics on replicas and query performance:
-
+
## Step 4. Run a sample workload
@@ -109,7 +109,7 @@ cockroach workload run movr --duration=5m 'postgresql://root@localhost:26257?ssl
Click **Refresh**. The query metrics will appear on the dashboard:
-
+
## See also
diff --git a/src/current/v25.3/log-sql-activity-to-datadog.md b/src/current/v25.3/log-sql-activity-to-datadog.md
index 2de0943083b..103a75e8930 100644
--- a/src/current/v25.3/log-sql-activity-to-datadog.md
+++ b/src/current/v25.3/log-sql-activity-to-datadog.md
@@ -145,7 +145,7 @@ Each `sampled_query` and `sampled_transaction` event has an `event.TransactionID
1. Navigate to [**Datadog > Logs**](https://app.datadoghq.com/logs).
1. Search for `@event.EventType:(sampled_query OR sampled_transaction)` to see the logs for the query and transaction events that are emitted. For example:
-
+
## See also
diff --git a/src/current/v25.3/logical-data-replication-overview.md b/src/current/v25.3/logical-data-replication-overview.md
index 8b30500aa76..2ec95433be4 100644
--- a/src/current/v25.3/logical-data-replication-overview.md
+++ b/src/current/v25.3/logical-data-replication-overview.md
@@ -29,13 +29,13 @@ For a comparison of CockroachDB high availability and resilience features and to
Maintain [high availability]({% link {{ page.version.version }}/data-resilience.md %}#high-availability) and resilience to region failures with a two-datacenter topology. You can run bidirectional LDR to ensure [data resilience]({% link {{ page.version.version }}/data-resilience.md %}) in your deployment, particularly in datacenter or region failures. If you set up two single-region clusters, in LDR, both clusters can receive application reads and writes with low, single-region write latency. Then, in a datacenter, region, or cluster outage, you can redirect application traffic to the surviving cluster with [low downtime]({% link {{ page.version.version }}/data-resilience.md %}#high-availability). In the following diagram, the two single-region clusters are deployed in US East and West to provide low latency for that region. The two LDR jobs ensure that the tables on both clusters will reach eventual consistency.
-
+
### Achieve workload isolation between clusters
Isolate critical application workloads from non-critical application workloads. For example, you may want to run jobs like [changefeeds]({% link {{ page.version.version }}/change-data-capture-overview.md %}) or [backups]({% link {{ page.version.version }}/backup-and-restore-overview.md %}) from one cluster to isolate these jobs from the cluster receiving the principal application traffic.
-
+
## Features
diff --git a/src/current/v25.3/map-sql-activity-to-app.md b/src/current/v25.3/map-sql-activity-to-app.md
index 13723f18c88..51d9c4953d0 100644
--- a/src/current/v25.3/map-sql-activity-to-app.md
+++ b/src/current/v25.3/map-sql-activity-to-app.md
@@ -22,7 +22,7 @@ SET application_name = movr_app;
Once you set the application name, the [DB Console]({% link {{ page.version.version }}/ui-overview.md %}) lets you [filter database workloads by application name]({% link {{ page.version.version }}/ui-statements-page.md %}#filter).
-
+
If parts of your applications or known microservices are experiencing performance degradation, you can filter for the database workload tracing statements and transactions back to that part of your application directly in the DB Console. You can quickly identify whether there were database performance problems and if so, troubleshoot the issue using [SQL observability touch points](#trace-sql-activity-using-metrics) in the DB Console.
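Rather than running `SET application_name` in every session, you can set the name in the connection string so that every connection from a given service is labeled automatically. A sketch, assuming a local insecure cluster and a service named `movr_app`:

~~~ shell
# Sketch: the application_name connection parameter labels all SQL activity from this connection.
cockroach sql --url "postgresql://root@localhost:26257/movr?sslmode=disable&application_name=movr_app"
~~~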
diff --git a/src/current/v25.3/monitor-and-analyze-transaction-contention.md b/src/current/v25.3/monitor-and-analyze-transaction-contention.md
index 7cbdb016fde..3fac08a9dba 100644
--- a/src/current/v25.3/monitor-and-analyze-transaction-contention.md
+++ b/src/current/v25.3/monitor-and-analyze-transaction-contention.md
@@ -40,7 +40,7 @@ In the DB Console, in the cluster view, this rate is averaged across all nodes,
The following image from the DB Console was taken from a cluster running more than 50,000 queries per second with around 2 contention events per second. This contention is unlikely to have an impact on the workload.
-
+
### SQL Activity pages
@@ -54,7 +54,7 @@ The [**Contention Time** column]({% link {{ page.version.version }}/ui-statement
The following image shows the **Statements** page with the top 3 statement fingerprints by Contention Time in a cluster containing the test data from the [Analyze using `crdb_internal` tables](#analyze-using-crdb_internal-tables) section.
-
+
#### Transactions page
@@ -64,7 +64,7 @@ The [**Contention Time** column]({% link {{ page.version.version }}/ui-transacti
The following image shows the **Transactions** page with the top 3 transaction fingerprints by Contention Time in a cluster containing the test data in the [Analyze using `crdb_internal` tables](#analyze-using-crdb_internal-tables) section.
-
+
### Insights page
@@ -377,15 +377,15 @@ This section applies a variation of the previously described analysis process to
Review the [DB Console Metrics]({% link {{ page.version.version }}/ui-overview.md %}#metrics) graphs to get a high-level understanding of the contention events. The [SQL Statement Errors]({% link {{ page.version.version }}/ui-sql-dashboard.md %}#sql-statement-errors) graph shows an increase of errors during the time period of 9:16 to 9:23 UTC:
-
+
The [SQL Statement Contention]({% link {{ page.version.version }}/ui-sql-dashboard.md %}#sql-statement-contention) graph shows a corresponding increase between 9:16 and 9:23 UTC:
-
+
The [Transaction Restarts]({% link {{ page.version.version }}/ui-sql-dashboard.md %}#transaction-restarts) graph also shows a corresponding increase between 9:16 and 9:23 UTC:
-
+
These graphs help to understand the incident at a high-level, but not the specific transactions that are involved. To understand that, query the `crdb_internal.transaction_contention_events` table.
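A minimal sketch of that query (exact column names vary by version, so start broad and narrow down from there):

~~~ sql
-- Sketch: list recent contention events recorded by the cluster.
SELECT * FROM crdb_internal.transaction_contention_events LIMIT 10;
~~~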
diff --git a/src/current/v25.3/monitor-cockroachdb-kubernetes.md b/src/current/v25.3/monitor-cockroachdb-kubernetes.md
index e53bde93041..a2adae4ab1b 100644
--- a/src/current/v25.3/monitor-cockroachdb-kubernetes.md
+++ b/src/current/v25.3/monitor-cockroachdb-kubernetes.md
@@ -167,11 +167,11 @@ If you're on Hosted GKE, before starting, make sure the email address associated
1. To verify that each CockroachDB node is connected to Prometheus, go to **Status > Targets**. The screen should look like this:
-
+
1. To verify that data is being collected, go to **Graph**, enter the `sys_uptime` variable in the field, click **Execute**, and then click the **Graph** tab. The screen should look like this:
-
+
{{site.data.alerts.callout_success}}
Prometheus auto-completes CockroachDB time series metrics for you, but if you want to see a full listing, with descriptions, port-forward as described in {% if page.secure == true %}[Access the DB Console]({% link {{ page.version.version }}/deploy-cockroachdb-with-kubernetes.md %}#step-4-access-the-db-console){% else %}[Access the DB Console]({% link {{ page.version.version }}/deploy-cockroachdb-with-kubernetes.md %}#step-4-access-the-db-console){% endif %} and then point your browser to the [Prometheus endpoint]({% link {{ page.version.version }}/prometheus-endpoint.md %}).
@@ -242,11 +242,11 @@ Active monitoring helps you spot problems early, but it is also essential to sen
1. Go to http://localhost:9093 in your browser. The screen should look like this:
-
+
1. Ensure that the Alertmanagers are visible to Prometheus by opening http://localhost:9090/status. The screen should look like this:
-
+
1. Add CockroachDB's starter [alerting rules](https://github.com/cockroachdb/cockroach/blob/master/cloud/kubernetes/prometheus/alert-rules.yaml):
@@ -262,11 +262,11 @@ Active monitoring helps you spot problems early, but it is also essential to sen
1. Ensure that the rules are visible to Prometheus by opening http://localhost:9090/rules. The screen should look like this:
-
+
1. Verify that the `TestAlertManager` example alert is firing by opening http://localhost:9090/alerts. The screen should look like this:
-
+
1. To remove the example alert:
diff --git a/src/current/v25.3/monitor-cockroachdb-operator.md b/src/current/v25.3/monitor-cockroachdb-operator.md
index e44c73c252d..2d0664e4e4b 100644
--- a/src/current/v25.3/monitor-cockroachdb-operator.md
+++ b/src/current/v25.3/monitor-cockroachdb-operator.md
@@ -105,11 +105,11 @@ If you're on Hosted GKE, before starting, make sure the email address associated
1. To verify that each CockroachDB node is connected to Prometheus, go to **Status > Targets**. The screen should look like this:
-
+
1. To verify that data is being collected, go to **Graph**, enter the `sys_uptime` variable in the field, click **Execute**, and then click the **Graph** tab. The screen should look like this:
-
+
{{site.data.alerts.callout_info}}
Prometheus auto-completes CockroachDB time series metrics for you, but if you want to see a full listing, with descriptions, port-forward as described in [Access the DB Console]({% link {{ page.version.version }}/deploy-cockroachdb-with-cockroachdb-operator.md %}#step-4-access-the-db-console) and then point your browser to [http://localhost:8080/_status/vars](http://localhost:8080/_status/vars).
@@ -174,11 +174,11 @@ Active monitoring helps you spot problems early, but it is also essential to sen
1. Go to [http://localhost:9093](http://localhost:9093/) in your browser. The screen should look like this:
-
+
1. Ensure that the Alertmanagers are visible to Prometheus by opening [http://localhost:9090/status](http://localhost:9090/status). The screen should look like this:
-
+
1. Add CockroachDB's starter [alerting rules](https://github.com/cockroachdb/cockroach/blob/master/cloud/kubernetes/prometheus/alert-rules.yaml):
@@ -193,11 +193,11 @@ Active monitoring helps you spot problems early, but it is also essential to sen
1. Ensure that the rules are visible to Prometheus by opening [http://localhost:9090/rules](http://localhost:9090/rules). The screen should look like this:
-
+
1. Verify that the `TestAlertManager` example alert is firing by opening [http://localhost:9090/alerts](http://localhost:9090/alerts). The screen should look like this:
-
+
1. To remove the example alert:
1. Use the `kubectl edit` command to open the rules for editing:
diff --git a/src/current/v25.3/monitoring-and-alerting.md b/src/current/v25.3/monitoring-and-alerting.md
index fb1a926937d..348aa3bbe36 100644
--- a/src/current/v25.3/monitoring-and-alerting.md
+++ b/src/current/v25.3/monitoring-and-alerting.md
@@ -143,7 +143,7 @@ The `/_status/vars` metrics endpoint is in Prometheus format and is not deprecat
Several endpoints return raw status meta information in JSON at `http://<host>:<http-port>/#/debug`. You can investigate and use these endpoints, but note that they are subject to change.
-
+
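For example, on an insecure local cluster you can fetch the Prometheus-format metrics endpoint mentioned above directly; the host and port below are placeholders for your node's HTTP address:

~~~ shell
# Sketch: pull raw metrics from a node's HTTP endpoint.
curl http://localhost:8080/_status/vars
~~~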
### Node status command
diff --git a/src/current/v25.3/node-shutdown.md b/src/current/v25.3/node-shutdown.md
index 6465879950a..3f2e463a336 100644
--- a/src/current/v25.3/node-shutdown.md
+++ b/src/current/v25.3/node-shutdown.md
@@ -311,25 +311,25 @@ This can lead to disk utilization imbalance across nodes. **This is expected beh
In this scenario, each range is replicated 3 times, with each replica on a different node:
-
+
If you try to decommission a node, the process will hang indefinitely because the cluster cannot move the decommissioning node's replicas to the other 2 nodes, which already have a replica of each range:
-
+
To successfully decommission a node in this cluster, you need to **add a 4th node**. The decommissioning process can then complete:
-
+
#### 5-node cluster with 3-way replication
In this scenario, like in the scenario above, each range is replicated 3 times, with each replica on a different node:
-
+
If you decommission a node, the process will run successfully because the cluster will be able to move the node's replicas to other nodes without doubling up any range replicas:
-
+
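The decommission itself is issued with the `cockroach node decommission` command. A minimal sketch, assuming an insecure cluster and that node 4 is the node being removed:

~~~ shell
# Sketch: decommission node 4; the command reports progress until all of its replicas have moved.
cockroach node decommission 4 --host=localhost:26257 --insecure
~~~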
diff --git a/src/current/v25.3/performance-benchmarking-with-tpcc-large.md b/src/current/v25.3/performance-benchmarking-with-tpcc-large.md
index daa4accbb6b..d24ab1d97ce 100644
--- a/src/current/v25.3/performance-benchmarking-with-tpcc-large.md
+++ b/src/current/v25.3/performance-benchmarking-with-tpcc-large.md
@@ -197,7 +197,7 @@ CockroachDB comes with a number of [built-in workloads]({% link {{ page.version.
- To monitor the number of lease transfers, open the [DB Console]({% link {{ page.version.version }}/ui-overview.md %}), select the **Replication** dashboard, hover over the **Range Operations** graph, and check the **Lease Transfers** data point.
- To check the number of snapshots, open the [DB Console]({% link {{ page.version.version }}/ui-overview.md %}), select the **Replication** dashboard, and hover over the **Snapshots** graph.
-
+
## Step 7. Allocate partitions
diff --git a/src/current/v25.3/performance-recipes.md b/src/current/v25.3/performance-recipes.md
index 1758d395baa..bd756e2fabe 100644
--- a/src/current/v25.3/performance-recipes.md
+++ b/src/current/v25.3/performance-recipes.md
@@ -113,7 +113,7 @@ These are indicators that lock contention occurred in the past:
{{site.data.alerts.end}}
- The **SQL Statement Contention** graph ([CockroachDB {{ site.data.products.cloud }} Console]({% link cockroachcloud/metrics-sql.md %}#sql-statement-contention) and [DB Console]({% link {{ page.version.version }}/ui-sql-dashboard.md %}#sql-statement-contention)) is showing spikes over time.
-
+
If a long-running transaction is waiting due to [lock contention]({% link {{ page.version.version }}/performance-best-practices-overview.md %}#transaction-contention):
@@ -150,7 +150,7 @@ When running under `SERIALIZABLE` isolation, implement [client-side retry handli
If you see many waiting transactions, a single long-running transaction may be blocking transactions that are, in turn, blocking others. In this case, sort the table by **Time Spent Waiting** to find the transaction that has been waiting for the longest amount of time. Unblocking this transaction may unblock the other transactions.
{{site.data.alerts.end}}
Click the transaction's execution ID and view the following transaction execution details:
-
+
- **Last Retry Reason** shows the last [transaction retry error](#transaction-retry-error) received for the transaction, if applicable.
- The details of the **blocking** transaction, directly below the **Contention Insights** section. Click the blocking transaction to view its details.
@@ -158,7 +158,7 @@ When running under `SERIALIZABLE` isolation, implement [client-side retry handli
1. [Identify the **blocking** transaction](#identify-conflicting-transactions) and view its transaction execution details.
1. Click its **Session ID** to open the **Session Details** page.
-
+
1. Click **Cancel Statement** to cancel the **Most Recent Statement** and thus the transaction, or click **Cancel Session** to cancel the session issuing the transaction.
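You can also cancel from a SQL shell instead of the DB Console. A sketch, where the IDs are placeholders taken from the **Session Details** page or from `SHOW SESSIONS`:

~~~ sql
-- Sketch: cancel a single running statement, or the whole session, by ID (placeholders shown).
CANCEL QUERY '<query_id>';
CANCEL SESSION '<session_id>';
~~~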
##### Identify transactions and objects that experienced lock contention
@@ -294,7 +294,7 @@ If the [Overview dashboard]({% link {{ page.version.version }}/ui-overview-dashb
In the DB Console, the [Tables List Tab]({% link {{ page.version.version }}/ui-databases-page.md %}#tables-list-tab) of the [Database Details Page]({% link {{ page.version.version }}/ui-databases-page.md %}#database-details-page) for a given database shows the percentage of live data for each table. For example:
-
+
In this example, at `37.3%` the `vehicles` table would be considered to have a low percentage of live data. In the worst cases, the percentage can be `0%`.
diff --git a/src/current/v25.3/performance.md b/src/current/v25.3/performance.md
index 2d19619a625..249ac9abbc4 100644
--- a/src/current/v25.3/performance.md
+++ b/src/current/v25.3/performance.md
@@ -22,7 +22,7 @@ For a refresher on what exactly TPC-C is and how it is measured, see [Benchmark
CockroachDB achieves this performance in [`SERIALIZABLE` isolation]({% link {{ page.version.version }}/demo-serializable.md %}), the strongest isolation level in the SQL standard.
-
+
| Metric | CockroachDB 19.2 | CockroachDB 21.1 |
|-------------------------------------------------+------------------+------------------|
@@ -37,7 +37,7 @@ CockroachDB achieves this performance in [`SERIALIZABLE` isolation]({% link {{ p
CockroachDB has **no theoretical scaling limit** and, in practice, can achieve near-linear performance at 256 nodes. Because the TPC-C results reflect leaps in scale, to test linear scaling, Cockroach Labs ran a simple benchmark named KV 95 (95% point reads, 5% point writes, all uniformly distributed) on AWS `c5d.4xlarge` machines:
-
+
This chart shows that adding nodes increases throughput linearly while holding p50 and p99 latency constant. The concurrency for each scale was chosen to optimize throughput while maintaining an acceptable latency and can be observed in the following table.
diff --git a/src/current/v25.3/physical-cluster-replication-technical-overview.md b/src/current/v25.3/physical-cluster-replication-technical-overview.md
index cec24c3edb8..1f0d79af063 100644
--- a/src/current/v25.3/physical-cluster-replication-technical-overview.md
+++ b/src/current/v25.3/physical-cluster-replication-technical-overview.md
@@ -27,7 +27,7 @@ The stream initialization proceeds as follows:
1. The initial scan runs on the primary and backfills all data from the primary virtual cluster as of the starting timestamp of the replication stream.
1. Once the initial scan is complete, the primary then begins streaming all changes from the point of the starting timestamp.
-
+
#### Start-up sequence with read on standby
@@ -55,7 +55,7 @@ If the primary cluster does not receive replicated time information from the sta
The tracked replicated time and the advancing protected timestamp allow the replication stream to also track _retained time_, which is a timestamp in the past indicating the lower bound that the replication stream could fail over to. The retained time can be up to 4 hours in the past, due to the protected timestamp. Therefore, the _failover window_ for a replication job falls between the retained time and the replicated time.
-
+
_Replication lag_ is the time between the most up-to-date replicated time and the actual time. While replication keeps as current as possible with the actual time, this replication lag window is where there is potential for data loss.
diff --git a/src/current/v25.3/query-behavior-troubleshooting.md b/src/current/v25.3/query-behavior-troubleshooting.md
index d13106413fc..59f06e4a84d 100644
--- a/src/current/v25.3/query-behavior-troubleshooting.md
+++ b/src/current/v25.3/query-behavior-troubleshooting.md
@@ -58,23 +58,23 @@ You can look more closely at the behavior of a statement by visualizing a [state
1. Click **JSON File** in the Jaeger UI and upload `trace-jaeger.json` from the diagnostics bundle. The trace will appear in the list on the right.
-
+
1. Click the trace to view its details. It is visualized as a collection of spans with timestamps. These may include operations executed by different nodes.
-
+
The full timeline displays the execution time and [execution phases]({% link {{ page.version.version }}/architecture/sql-layer.md %}#sql-parser-planner-executor) for the statement.
1. Click a span to view details for that span and log messages.
-
+
1. You can troubleshoot [transaction contention]({% link {{ page.version.version }}/performance-best-practices-overview.md %}#transaction-contention), for example, by gathering [diagnostics]({% link {{ page.version.version }}/ui-statements-page.md %}#diagnostics) on statements with high latency and looking through the log messages in `trace-jaeger.json` for jumps in latency.
In the following example, the trace shows that there is significant latency between a push attempt on a transaction that is holding a [lock]({% link {{ page.version.version }}/architecture/transaction-layer.md %}#writing) (56.85ms) and that transaction being committed (131.37ms).
-
+
#### Visualize traces sent directly from CockroachDB
@@ -93,7 +93,7 @@ Enabling full tracing is expensive both in terms of CPU usage and memory footpri
1. Go to [`http://localhost:16686`](http://localhost:16686).
1. In the Service field, select **CockroachDB**.
-
+
1. Click **Find Traces**.
diff --git a/src/current/v25.3/query-spatial-data.md b/src/current/v25.3/query-spatial-data.md
index 6a32054d268..678918809cc 100644
--- a/src/current/v25.3/query-spatial-data.md
+++ b/src/current/v25.3/query-spatial-data.md
@@ -279,7 +279,7 @@ We can see that almost half of all of the tornadoes in this outbreak began in Ok
It might be interesting to draw these points on a map. The image below shows the points from the query above drawn as a simple polygon on a map of Oklahoma. The boxes around the polygon show the [spatial index]({% link {{ page.version.version }}/spatial-indexes.md %}) coverings for the polygon.
-
+
(Map data © 2020 Google)
diff --git a/src/current/v25.3/secure-a-cluster.md b/src/current/v25.3/secure-a-cluster.md
index e2819037b89..c54abe3d4a4 100644
--- a/src/current/v25.3/secure-a-cluster.md
+++ b/src/current/v25.3/secure-a-cluster.md
@@ -309,7 +309,7 @@ The CockroachDB [DB Console]({% link {{ page.version.version }}/ui-overview.md %
1. On the [**Cluster Overview**]({% link {{ page.version.version }}/ui-cluster-overview-page.md %}), notice that three nodes are live, with an identical replica count on each node:
-
+
This demonstrates CockroachDB's [automated replication]({% link {{ page.version.version }}/demo-replication-and-rebalancing.md %}) of data via the Raft consensus protocol.
@@ -319,7 +319,7 @@ The CockroachDB [DB Console]({% link {{ page.version.version }}/ui-overview.md %
1. Click [**Metrics**]({% link {{ page.version.version }}/ui-overview-dashboard.md %}) to access a variety of time series dashboards, including graphs of SQL queries and service latency over time:
-
+
1. Use the [**Databases**]({% link {{ page.version.version }}/ui-databases-page.md %}), [**Statements**]({% link {{ page.version.version }}/ui-statements-page.md %}), and [**Jobs**]({% link {{ page.version.version }}/ui-jobs-page.md %}) pages to view details about your databases and tables, to assess the performance of specific queries, and to monitor the status of long-running operations like schema changes, respectively.
@@ -349,7 +349,7 @@ The CockroachDB [DB Console]({% link {{ page.version.version }}/ui-overview.md %
1. Back in the DB Console, despite one node being "suspect", notice the continued SQL traffic:
-
+
1. Restart node 3:
@@ -393,7 +393,7 @@ Adding capacity is as simple as starting more nodes with `cockroach start`.
1. Back on the **Cluster Overview** in the DB Console, you'll now see 5 nodes listed:
-
+
At first, the replica count will be lower for nodes 4 and 5. Very soon, however, you'll see those numbers even out across all nodes, indicating that data is being [automatically rebalanced]({% link {{ page.version.version }}/demo-replication-and-rebalancing.md %}) to utilize the additional capacity of the new nodes.
diff --git a/src/current/v25.3/set-up-logical-data-replication.md b/src/current/v25.3/set-up-logical-data-replication.md
index 1de5e24d359..79f27ffb6c6 100644
--- a/src/current/v25.3/set-up-logical-data-replication.md
+++ b/src/current/v25.3/set-up-logical-data-replication.md
@@ -15,7 +15,7 @@ In this tutorial, you will set up [**logical data replication (LDR)**]({% link {
In the following diagram, **LDR stream 1** creates a unidirectional LDR setup. Introducing **LDR stream 2** extends the setup to bidirectional.
-
+
For more details on use cases, refer to the [Logical Data Replication Overview]({% link {{ page.version.version }}/logical-data-replication-overview.md %}).
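Each LDR stream in the diagram is started from the destination cluster with a `CREATE LOGICAL REPLICATION STREAM` statement. A rough sketch, where the table name and external connection name are placeholders; running the mirror-image statement on the other cluster is what makes the setup bidirectional:

~~~ sql
-- Sketch: replicate a table into this cluster from the source cluster behind an external connection.
CREATE LOGICAL REPLICATION STREAM
  FROM TABLE movr.public.rides ON 'external://source_cluster'
  INTO TABLE movr.public.rides;
~~~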
diff --git a/src/current/v25.3/show-trace.md b/src/current/v25.3/show-trace.md
index a7dbeac0131..44b96c85267 100644
--- a/src/current/v25.3/show-trace.md
+++ b/src/current/v25.3/show-trace.md
@@ -48,7 +48,7 @@ Concept | Description
Consider a visualization of a trace for one statement as [visualized by Jaeger]({% link {{ page.version.version }}/query-behavior-troubleshooting.md %}#visualize-statement-traces-in-jaeger). The image shows spans and log messages. You can see names of operations and sub-operations, along with parent-child relationships and timing information, and it's easy to see which operations are executed in parallel.
-
+
## Response
diff --git a/src/current/v25.3/spatial-indexes.md b/src/current/v25.3/spatial-indexes.md
index 5ae01fae12b..ab9775e4af6 100644
--- a/src/current/v25.3/spatial-indexes.md
+++ b/src/current/v25.3/spatial-indexes.md
@@ -56,15 +56,15 @@ Whichever approach to indexing is used, when an object is indexed, a "covering"
Under the hood, CockroachDB uses the [S2 geometry library](https://s2geometry.io/) to divide the space being indexed into a [quadtree](https://wikipedia.org/wiki/Quadtree) data structure with a set number of levels and a data-independent shape. Each node in the quadtree (really, [S2 cell](https://s2geometry.io/devguide/s2cell_hierarchy.html)) represents some part of the indexed space and is divided once horizontally and once vertically to produce 4 child cells in the next level. The following image shows visually how a location (marked in red) is represented using levels of a quadtree:
-
+
Visually, you can think of the S2 library as enclosing a sphere in a cube. We map from points on each face of the cube to points on the face of the sphere. As you can see in the following 2-dimensional picture, there is a projection that occurs in this mapping: the lines entering from the left mark points on the cube face, and are "refracted" by the material of the cube face before touching the surface of the sphere. This projection reduces the distortion that would occur if the points on the cube face were projected straight onto the sphere.
-
+
Next, let's look at a 3-dimensional image that shows the cube and sphere more clearly. Each cube face is mapped to the quadtree data structure mentioned, and each node in the quadtree is numbered using a [Hilbert space-filling curve](https://wikipedia.org/wiki/Hilbert_curve) which preserves locality of reference. In the following image, you can imagine the points of the Hilbert curve on the rear face of the cube being projected onto the sphere in the center. The use of a space-filling curve means that two shapes that are near each other on the sphere are very likely to be near each other on the line that makes up the Hilbert curve. This is good for performance.
-
+
When you index a spatial object, a covering is computed using some number of the cells in the quadtree. The number of covering cells can vary per indexed object by passing special arguments to `CREATE INDEX` that tell CockroachDB how many levels of S2 cells to use. The leaf nodes of the S2 quadtree are at level 30, and for `GEOGRAPHY` they measure 1 cm across the Earth's surface. By default, `GEOGRAPHY` indexes use up to level 30, and get this level of precision. We also use S2 cell coverings in a slightly different way for `GEOMETRY` indexes. The precision you get there is the bounding length of the `GEOMETRY` index divided by 4^30. For more information, see [Tuning spatial indexes](#tuning-spatial-indexes).
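Those `CREATE INDEX` arguments are ordinary index storage parameters. A hedged sketch, with a hypothetical table and column:

~~~ sql
-- Sketch: a spatial index whose coverings may use up to 20 cells, none finer than level 20.
CREATE INDEX geom_covering_idx ON tornadoes USING GIST (geom)
  WITH (s2_max_cells = 20, s2_max_level = 20);
~~~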
@@ -99,11 +99,11 @@ We will generate coverings for the following geometry object, which describes a
The following animated image shows the S2 coverings that are generated as we increase the `s2_max_cells` parameter from 1 to 30 (minimum to maximum):
-
+
Here are the same images presented in a grid. You can see that as we turn up the `s2_max_cells` parameter, more work is done by CockroachDB to discover a tighter and tighter covering (that is, a covering using more and smaller cells). The covering for this particular shape reaches a reasonable level of accuracy when `s2_max_cells` reaches 10, and stops improving much past 12.
-
+
### Index tuning parameters
@@ -149,7 +149,7 @@ SELECT ST_AsGeoJSON(st_collect(geom)) FROM tmp_viz;
When you paste the JSON output into [geojson.io](http://geojson.io), it generates the following picture, which shows both the `LINESTRING` and its S2 covering based on the options you passed to `st_s2covering`.
-
+
### Create a spatial index
diff --git a/src/current/v25.3/spatial-tutorial.md b/src/current/v25.3/spatial-tutorial.md
index 72d45bf1fdd..bf735930eab 100644
--- a/src/current/v25.3/spatial-tutorial.md
+++ b/src/current/v25.3/spatial-tutorial.md
@@ -158,7 +158,7 @@ FROM
Paste the result above into [geojson.io](http://geojson.io) and you should see the following map, with gray markers for each loon sighting from the bird survey.
-
+
### (2) What is the total area of Loon sightings?
@@ -561,7 +561,7 @@ WHERE
Paste the result above into [geojson.io](http://geojson.io) and you should see the following map:
-
+
### (10) What is the area of the shape of all bookstore locations that are in the Loon's habitat range within NY state?
@@ -702,7 +702,7 @@ The result is a very large chunk of JSON:
Paste the result above into [geojson.io](http://geojson.io) and you should see the following map:
-
+
### (13) What were the 25 most-commonly-sighted birds in 2019 within 10 miles of the route between Mysteries on Main Street in Johnstown, NY and The Bookstore Plus in Lake Placid, NY?
@@ -1687,7 +1687,7 @@ The `tutorial` database contains the following tables:
Below is an entity-relationship diagram showing the `bookstores` and `bookstore_routes` tables (generated using [DBeaver]({% link {{ page.version.version }}/dbeaver.md %})):
-
+
As mentioned above, the `bookstores` table was created by scraping web data from the [American Booksellers Association website's member directory](https://bookweb.org/member_directory/search/ABAmember). In addition, the `geom` column was constructed by doing some [address geocoding](https://wikipedia.org/wiki/Address_geocoding) that converted each bookstore's address to a lon/lat pair, which was then converted to a spatial object using `ST_MakePoint`. For each bookstore, the script did a bit of parsing and geocoding and ran essentially the following query:
@@ -1751,7 +1751,7 @@ There are multiple ways to do geocoding. You can use REST API-based services or
Meanwhile, the `roads` table has many columns; the most important ones used in this tutorial are `state`, `geom`, `miles`, and `prime_name` (the human-readable name of the road).
-
+
For more information about what the other columns in `roads` mean, see the [full data set description](https://www.sciencebase.gov/catalog/file/get/581d052be4b08da350d524ce?f=__disk__60%2F6b%2F4e%2F606b4e564884da8cca57ffeb229cd817006616e0&transform=1&allowOpen=true).
@@ -1769,7 +1769,7 @@ The tables in the `birds` database are diagrammed below:
- `routes` is a list of ~130 prescribed locations that the birdwatchers helping with the survey visit each year. The `geom` associated with each route is a [Point]({% link {{ page.version.version }}/point.md %}) marking the latitude and longitude of the route's starting point. For details, see the [schema](https://www.sciencebase.gov/catalog/file/get/5ea04e9a82cefae35a129d65?f=__disk__b4%2F2f%2Fcf%2Fb42fcfe28a799db6e8c97200829ea1ebaccbf8ea&transform=1&allowOpen=true) (search for the text "routes.csv").
- `observations` describes the ~85,000 times and places in which birds of various species were actually seen. The `bird_id` is a [foreign key]({% link {{ page.version.version }}/foreign-key.md %}) to the ID in the `birds` table, and the `route_id` points to the ID of the `routes` table.
-
+
Each of these tables were populated using a script that parsed [the CSV files available for download](https://www.sciencebase.gov/catalog/item/52b1dfa8e4b0d9b325230cd9) and added the data using [`INSERT`]({% link {{ page.version.version }}/insert.md %}) statements. For the `routes` table, once again the `ST_MakePoint` function was used to create a geometry from the lon/lat values in the CSV as follows:
diff --git a/src/current/v25.3/st_contains.md b/src/current/v25.3/st_contains.md
index da76c96dc07..902bcf78b24 100644
--- a/src/current/v25.3/st_contains.md
+++ b/src/current/v25.3/st_contains.md
@@ -53,7 +53,7 @@ SELECT ST_Contains(st_geomfromtext('SRID=4326;POLYGON((-87.906471 43.038902, -95
(1 row)
~~~
-
+
### False
@@ -75,7 +75,7 @@ SELECT st_contains(st_geomfromtext('SRID=4326;POLYGON((-87.906471 43.038902, -95
(1 row)
~~~
-
+
## See also
diff --git a/src/current/v25.3/st_convexhull.md b/src/current/v25.3/st_convexhull.md
index 4a22d3a97ac..c047172aef9 100644
--- a/src/current/v25.3/st_convexhull.md
+++ b/src/current/v25.3/st_convexhull.md
@@ -235,7 +235,7 @@ In this example, we will generate the convex hull of a single geometry. The geo
1. Paste the JSON emitted in the previous step into [geojson.io](http://geojson.io) and you should see an image like the following, which shows the convex hull surrounding the locations of [most of the independent bookstores in New York State](https://www.bookweb.org/member_directory/search/ABAmember/results/0/0/ny/0):
-
+
1. Finally, drop the temporary table if you no longer need it:
diff --git a/src/current/v25.3/st_coveredby.md b/src/current/v25.3/st_coveredby.md
index 7983d4c24bd..20a3e2d0095 100644
--- a/src/current/v25.3/st_coveredby.md
+++ b/src/current/v25.3/st_coveredby.md
@@ -48,7 +48,7 @@ SELECT ST_CoveredBy(st_geomfromtext('SRID=4326;POLYGON((-87.623177 41.881832, -9
(1 row)
~~~
-
+
### False
@@ -68,7 +68,7 @@ SELECT ST_CoveredBy(st_geomfromtext('SRID=4326;POLYGON((-87.906471 43.038902, -9
(1 row)
~~~
-
+
## See also
diff --git a/src/current/v25.3/st_covers.md b/src/current/v25.3/st_covers.md
index 4bfb6dedc16..4b0dfeb46c9 100644
--- a/src/current/v25.3/st_covers.md
+++ b/src/current/v25.3/st_covers.md
@@ -50,7 +50,7 @@ SELECT ST_Covers(st_geomfromtext('SRID=4326;POLYGON((-87.906471 43.038902, -95.9
(1 row)
~~~
-
+
### False
@@ -70,7 +70,7 @@ SELECT ST_Covers(st_geomfromtext('SRID=4326;POLYGON((-87.906471 43.038902, -95.9
(1 row)
~~~
-
+
## See also
diff --git a/src/current/v25.3/st_disjoint.md b/src/current/v25.3/st_disjoint.md
index b794f594b46..728ab81e861 100644
--- a/src/current/v25.3/st_disjoint.md
+++ b/src/current/v25.3/st_disjoint.md
@@ -49,7 +49,7 @@ SELECT st_disjoint(st_geomfromtext('SRID=4326;POLYGON((-87.906471 43.038902, -95
(1 row)
~~~
-
+
### False
@@ -69,7 +69,7 @@ SELECT st_disjoint(st_geomfromtext('SRID=4326;POLYGON((-87.906471 43.038902, -95
(1 row)
~~~
-
+
## See also
diff --git a/src/current/v25.3/st_equals.md b/src/current/v25.3/st_equals.md
index cfc566ba261..60d5a77546a 100644
--- a/src/current/v25.3/st_equals.md
+++ b/src/current/v25.3/st_equals.md
@@ -45,7 +45,7 @@ SELECT st_equals(st_geomfromtext('SRID=4326;POLYGON((-87.906471 43.038902, -95.9
(1 row)
~~~
-
+
### False
@@ -65,7 +65,7 @@ SELECT st_equals(st_geomfromtext('SRID=4326;POLYGON((-87.906471 43.038902, -95.9
(1 row)
~~~
-
+
## See also
diff --git a/src/current/v25.3/st_intersects.md b/src/current/v25.3/st_intersects.md
index 89d38cc5565..cde54438b70 100644
--- a/src/current/v25.3/st_intersects.md
+++ b/src/current/v25.3/st_intersects.md
@@ -46,7 +46,7 @@ SELECT st_intersects(st_geomfromtext('SRID=4326;POLYGON((-87.906471 43.038902, -
(1 row)
~~~
-
+
### False
@@ -66,7 +66,7 @@ SELECT st_intersects(st_geomfromtext('SRID=4326;POLYGON((-87.906471 43.038902, -
(1 row)
~~~
-
+
## See also
diff --git a/src/current/v25.3/st_overlaps.md b/src/current/v25.3/st_overlaps.md
index 47a911ab55d..1da1d60011e 100644
--- a/src/current/v25.3/st_overlaps.md
+++ b/src/current/v25.3/st_overlaps.md
@@ -47,7 +47,7 @@ SELECT st_overlaps(st_geomfromtext('SRID=4326;POLYGON((-87.906471 43.038902, -95
(1 row)
~~~
-
+
### False
@@ -67,7 +67,7 @@ SELECT st_overlaps(st_geomfromtext('SRID=4326;POLYGON((-79.995888 40.440624,-74.
(1 row)
~~~
-
+
## See also
diff --git a/src/current/v25.3/st_touches.md b/src/current/v25.3/st_touches.md
index d579f0f432d..7099df34822 100644
--- a/src/current/v25.3/st_touches.md
+++ b/src/current/v25.3/st_touches.md
@@ -47,7 +47,7 @@ SELECT st_touches(st_geomfromtext('SRID=4326;POLYGON((-87.906471 43.038902, -95.
(1 row)
~~~
-
+
### False
@@ -67,7 +67,7 @@ SELECT st_touches(st_geomfromtext('SRID=4326;POLYGON((-87.906471 43.038902, -95.
(1 row)
~~~
-
+
## See also
diff --git a/src/current/v25.3/st_union.md b/src/current/v25.3/st_union.md
index df4a9429bc1..9ee0110f404 100644
--- a/src/current/v25.3/st_union.md
+++ b/src/current/v25.3/st_union.md
@@ -236,7 +236,7 @@ In this example, we will generate a single geometry from many individual points
1. Paste the JSON emitted in the previous step into [geojson.io](http://geojson.io) and you should see an image like the following, which shows the location of [most of the independent bookstores in New York State](https://www.bookweb.org/member_directory/search/ABAmember/results/0/0/ny/0):
-
+
1. Finally, drop the temporary table if you no longer need it:
diff --git a/src/current/v25.3/st_within.md b/src/current/v25.3/st_within.md
index cad89f59c56..803e707e249 100644
--- a/src/current/v25.3/st_within.md
+++ b/src/current/v25.3/st_within.md
@@ -53,7 +53,7 @@ SELECT ST_Within(st_geomfromtext('SRID=4326;POLYGON((-87.623177 41.881832, -90.1
(1 row)
~~~
-
+
### False
@@ -73,7 +73,7 @@ SELECT ST_Within(st_geomfromtext('SRID=4326;POLYGON((-87.906471 43.038902, -95.9
(1 row)
~~~
-
+
## See also
diff --git a/src/current/v25.3/start-a-local-cluster-in-docker-windows.md b/src/current/v25.3/start-a-local-cluster-in-docker-windows.md
index 5d704a7b5d7..a6e6ffe21c9 100644
--- a/src/current/v25.3/start-a-local-cluster-in-docker-windows.md
+++ b/src/current/v25.3/start-a-local-cluster-in-docker-windows.md
@@ -300,7 +300,7 @@ The [DB Console]({% link {{ page.version.version }}/ui-overview.md %}) gives you
1. On the [**Cluster Overview**]({% link {{ page.version.version }}/ui-cluster-overview-page.md %}), notice that three nodes are live, with an identical replica count on each node:
-
+
This demonstrates CockroachDB's [automated replication]({% link {{ page.version.version }}/demo-replication-and-rebalancing.md %}) of data via the Raft consensus protocol.
@@ -310,7 +310,7 @@ The [DB Console]({% link {{ page.version.version }}/ui-overview.md %}) gives you
1. Click [**Metrics**]({% link {{ page.version.version }}/ui-overview-dashboard.md %}) to access a variety of time series dashboards, including graphs of SQL queries and service latency over time:
-
+
1. Use the [**Databases**]({% link {{ page.version.version }}/ui-databases-page.md %}), [**Statements**]({% link {{ page.version.version }}/ui-statements-page.md %}), and [**Jobs**]({% link {{ page.version.version }}/ui-jobs-page.md %}) pages to view details about your databases and tables, to assess the performance of specific queries, and to monitor the status of long-running operations like schema changes, respectively.
1. Optionally verify that DB Console instances for `roach2` and `roach3` are reachable on ports 8081 and 8082 and show the same information as port 8080.
@@ -321,7 +321,7 @@ The CockroachDB [DB Console]({% link {{ page.version.version }}/ui-overview.md %
1. On the [**Cluster Overview**]({% link {{ page.version.version }}/ui-cluster-overview-page.md %}), notice that three nodes are live, with an identical replica count on each node:
-
+
This demonstrates CockroachDB's [automated replication]({% link {{ page.version.version }}/demo-replication-and-rebalancing.md %}) of data via the Raft consensus protocol.
@@ -331,7 +331,7 @@ The CockroachDB [DB Console]({% link {{ page.version.version }}/ui-overview.md %
1. Click [**Metrics**]({% link {{ page.version.version }}/ui-overview-dashboard.md %}) to access a variety of time series dashboards, including graphs of SQL queries and service latency over time:
-
+
1. Use the [**Databases**]({% link {{ page.version.version }}/ui-databases-page.md %}), [**Statements**]({% link {{ page.version.version }}/ui-statements-page.md %}), and [**Jobs**]({% link {{ page.version.version }}/ui-jobs-page.md %}) pages to view details about your databases and tables, to assess the performance of specific queries, and to monitor the status of long-running operations like schema changes, respectively.
diff --git a/src/current/v25.3/start-a-local-cluster.md b/src/current/v25.3/start-a-local-cluster.md
index a38f33f08a4..e6d28a0b105 100644
--- a/src/current/v25.3/start-a-local-cluster.md
+++ b/src/current/v25.3/start-a-local-cluster.md
@@ -43,7 +43,7 @@ This section shows how to start a cluster interactively. In production, operator
{{site.data.alerts.callout_info}}
The `--background` flag is not recommended. If you decide to start nodes in the background, you must also pass the `--pid-file` argument. To stop a `cockroach` process running in the background, extract the process ID from the PID file and pass it to the command to [stop the node](#step-7-stop-the-cluster).
- In production, operators usually use a process manager like `systemd` to start and manage the `cockroach` process on each node. Refer to [Deploy CockroachDB On-Premises]({% link v23.1/deploy-cockroachdb-on-premises.md %}?filters=systemd).
+ In production, operators usually use a process manager like `systemd` to start and manage the `cockroach` process on each node. Refer to [Deploy CockroachDB On-Premises]({% link {{ page.version.version }}/deploy-cockroachdb-on-premises.md %}?filters=systemd).
{{site.data.alerts.end}}
You'll see a message like the following:
@@ -254,7 +254,7 @@ The CockroachDB [DB Console]({% link {{ page.version.version }}/ui-overview.md %
1. On the [**Cluster Overview**]({% link {{ page.version.version }}/ui-cluster-overview-page.md %}), notice that three nodes are live, with an identical replica count on each node:
-
+
This demonstrates CockroachDB's [automated replication]({% link {{ page.version.version }}/demo-replication-and-rebalancing.md %}) of data via the Raft consensus protocol.
@@ -264,7 +264,7 @@ The CockroachDB [DB Console]({% link {{ page.version.version }}/ui-overview.md %
1. Click [**Metrics**]({% link {{ page.version.version }}/ui-overview-dashboard.md %}) to access a variety of time series dashboards, including graphs of SQL queries and service latency over time:
-
+
1. Use the [**Databases**]({% link {{ page.version.version }}/ui-databases-page.md %}), [**Statements**]({% link {{ page.version.version }}/ui-statements-page.md %}), and [**Jobs**]({% link {{ page.version.version }}/ui-jobs-page.md %}) pages to view details about your databases and tables, to assess the performance of specific queries, and to monitor the status of long-running operations like schema changes, respectively.
@@ -294,7 +294,7 @@ The CockroachDB [DB Console]({% link {{ page.version.version }}/ui-overview.md %
1. In the DB Console, despite one node being "suspect", notice the continued SQL traffic:
-
+
1. Go to the terminal window for `node3` and restart it:
@@ -338,7 +338,7 @@ Adding capacity is as simple as starting more nodes with `cockroach start`.
1. In the DB Console **Cluster Overview** page, confirm that the cluster now has five nodes.
-
+
At first, the replica count will be lower for `node4` and `node5`. Very soon, however, you'll see those numbers even out across all nodes, indicating that data is being [automatically rebalanced]({% link {{ page.version.version }}/demo-replication-and-rebalancing.md %}) to utilize the additional capacity of the new nodes.
diff --git a/src/current/v25.3/stream-a-changefeed-to-a-confluent-cloud-kafka-cluster.md b/src/current/v25.3/stream-a-changefeed-to-a-confluent-cloud-kafka-cluster.md
index 73f460242ac..483b748c244 100644
--- a/src/current/v25.3/stream-a-changefeed-to-a-confluent-cloud-kafka-cluster.md
+++ b/src/current/v25.3/stream-a-changefeed-to-a-confluent-cloud-kafka-cluster.md
@@ -331,11 +331,11 @@ Move to the terminal window in which you started the Kafka consumer. As the chan
You can also view the messages for your cluster in the Confluent Cloud console in the **Topics** sidebar under the **Messages** tab.
-
+
You can use the **Schema** tab to view the schema for a specific topic.
-
+
## See also
diff --git a/src/current/v25.3/take-locality-restricted-backups.md b/src/current/v25.3/take-locality-restricted-backups.md
index 8d46625ab4a..a10a11297d1 100644
--- a/src/current/v25.3/take-locality-restricted-backups.md
+++ b/src/current/v25.3/take-locality-restricted-backups.md
@@ -55,7 +55,7 @@ The following diagram shows a CockroachDB cluster where each of the nodes can co
Instead, Node 3's locality does match the backup job's `EXECUTION LOCALITY`. Replicas that match a backup job's locality designation and hold the backup job's row data will begin reading and exporting to cloud storage.
-
+
To execute the backup only on nodes in the same region as the cloud storage location, you can specify [locality filters]({% link {{ page.version.version }}/cockroach-start.md %}#locality) that a node must match to take part in the backup job's execution.
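For illustration, here is a minimal sketch of such a backup statement; the database name, storage URI, and region value are placeholders, not values from this page:

~~~ sql
-- Restrict execution of the backup job to nodes whose locality matches
-- the region of the target cloud storage bucket.
BACKUP DATABASE movr
  INTO 's3://backup-bucket/movr?AUTH=implicit'
  WITH EXECUTION LOCALITY = 'region=us-east-1';
~~~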
@@ -81,7 +81,7 @@ Sometimes the execution of backup jobs can consume considerable resources when r
This diagram shows a CockroachDB cluster in four regions. The node used to run the backup job was configured with [non-voting replicas]({% link {{ page.version.version }}/architecture/replication-layer.md %}#non-voting-replicas) to provide low-latency reads. The node in this region will complete the backup job coordination and data export to cloud storage.
-
+
For details, refer to:
diff --git a/src/current/v25.3/topology-basic-production.md b/src/current/v25.3/topology-basic-production.md
index 0c3d51afda2..e93d0a6207b 100644
--- a/src/current/v25.3/topology-basic-production.md
+++ b/src/current/v25.3/topology-basic-production.md
@@ -17,7 +17,7 @@ If you haven't already, [review the full range of topology patterns]({% link {{
## Configuration
-
+
1. Provision hardware as follows:
- 1 region with 3 AZs
@@ -56,7 +56,7 @@ For example, in the animation below:
1. The leaseholder retrieves the results and returns them to the gateway node.
1. The gateway node returns the results to the client.
-
+
#### Writes
@@ -72,17 +72,17 @@ For example, in the animation below:
1. The leaseholders then return acknowledgement of the commit to the gateway node.
1. The gateway node returns the acknowledgement to the client.
-
+
### Resiliency
Because each range is balanced across AZs, one AZ can fail without interrupting access to any data:
-
+
However, if an additional AZ fails at the same time, the ranges that lose consensus become unavailable for reads and writes:
-
+
## See also
diff --git a/src/current/v25.3/topology-development.md b/src/current/v25.3/topology-development.md
index 3bf9d21906a..748a999305c 100644
--- a/src/current/v25.3/topology-development.md
+++ b/src/current/v25.3/topology-development.md
@@ -17,7 +17,7 @@ If you haven't already, [review the full range of topology patterns]({% link {{
## Configuration
-
+
For this pattern, you can either [run CockroachDB locally]({% link {{ page.version.version }}/start-a-local-cluster.md %}) or [deploy a single-node cluster on a cloud VM]({% link {{ page.version.version }}/manual-deployment.md %}).
@@ -27,13 +27,13 @@ For this pattern, you can either [run CockroachDB locally]({% link {{ page.versi
With the CockroachDB node in the same region as your client, and without the overhead of replication, both read and write latency are very low:
-
+
### Resiliency
In a single-node cluster, CockroachDB does not replicate data and, therefore, is not resilient to failures. If the machine where the node is running fails, or if the region or availability zone containing the machine fails, the cluster becomes unavailable:
-
+
## See also
diff --git a/src/current/v25.3/topology-follow-the-workload.md b/src/current/v25.3/topology-follow-the-workload.md
index 431299750f3..86aa81e2d6c 100644
--- a/src/current/v25.3/topology-follow-the-workload.md
+++ b/src/current/v25.3/topology-follow-the-workload.md
@@ -32,7 +32,7 @@ Note that if you start using the [multi-region SQL abstractions]({% link {{ page
Aside from [deploying a cluster across three regions](#cluster-setup) properly, with each node started with the [`--locality`]({% link {{ page.version.version }}/cockroach-start.md %}#locality) flag specifying its region and zone combination, this behavior requires no extra configuration. CockroachDB will balance the replicas for a table across the three regions and will assign the range lease to the replica in the region with the greatest demand at any given time (the follow-the-workload feature). This means that read latency in the active region will be low while read latency in other regions will be higher due to having to leave the region to reach the leaseholder. Write latency will be higher as well due to always involving replicas in multiple regions.
-
+
{{site.data.alerts.callout_info}}
Follow-the-workload is also used by [system ranges containing important internal data]({% link {{ page.version.version }}/configure-replication-zones.md %}#create-a-replication-zone-for-a-system-range).
@@ -54,7 +54,7 @@ For example, in the animation below, the most active region is `us-east` and, th
1. The leaseholder retrieves the results and returns them to the gateway node.
1. The gateway node returns the results to the client. In this case, reads in the `us-east` remain in the region and are lower latency than reads in other regions.
-
+
#### Writes
@@ -70,17 +70,17 @@ For example, in the animation below, assuming the most active region is still `u
1. The leaseholders then return acknowledgement of the commit to the gateway node.
1. The gateway node returns the acknowledgement to the client.
-
+
### Resiliency
Because this pattern balances the replicas for the table across regions, one entire region can fail without interrupting access to the table:
-
+
{% comment %} However, if an additional machine holding a replica for the table fails at the same time as the region failure, the range to which the replica belongs becomes unavailable for reads and writes:
-
{% endcomment %}
+
{% endcomment %}
## See also
diff --git a/src/current/v25.3/topology-follower-reads.md b/src/current/v25.3/topology-follower-reads.md
index 362a299e49f..ac218e0bb6b 100644
--- a/src/current/v25.3/topology-follower-reads.md
+++ b/src/current/v25.3/topology-follower-reads.md
@@ -30,7 +30,7 @@ If reads can use stale data, use [stale follower reads]({% link {{ page.version.
With each node started with the [`--locality`]({% link {{ page.version.version }}/cockroach-start.md %}#locality) flag specifying its region and zone combination, CockroachDB will balance the replicas for a table across the three regions:
-
+
### Summary
@@ -108,7 +108,7 @@ For example, in the following diagram:
1. The replica retrieves the results as of your preferred staleness interval in the past and returns them to the gateway node.
1. The gateway node returns the results to the client.
-
+
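To illustrate the stale read path described above, the following is a minimal follower-read sketch; the table and filter are illustrative only:

~~~ sql
-- Read slightly stale data at the follower read timestamp so the query
-- can be served by the nearest replica instead of the leaseholder.
SELECT * FROM rides
  AS OF SYSTEM TIME follower_read_timestamp()
  WHERE city = 'boston';
~~~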
#### Writes
@@ -124,13 +124,13 @@ For example, in the following animation:
1. The leaseholder then returns acknowledgement of the commit to the gateway node.
1. The gateway node returns the acknowledgement to the client.
-
+
### Resiliency
Because this pattern balances the replicas for the table across regions, one entire region can fail without interrupting access to the table:
-
+
## See also
diff --git a/src/current/v25.3/troubleshoot-lock-contention.md b/src/current/v25.3/troubleshoot-lock-contention.md
index 6e4dfe7f0f6..830d7fc3459 100644
--- a/src/current/v25.3/troubleshoot-lock-contention.md
+++ b/src/current/v25.3/troubleshoot-lock-contention.md
@@ -207,18 +207,18 @@ This step assumes you have already run the SQL statements from [Example 1](#exam
After executing the transactions in the [previous section](#step-1-understand-lock-contention), open the [DB Console](#db-console) for the demo cluster. Navigate to the **Insights** page and select **Workload Insights** > **Transactions Executions**.
-
+
Depending on when you [executed the transactions](#example-1), you may have to select a longer time interval, such as **Past 6 Hours**, to display the transactions flagged with insights.
-
+
With an adequate time interval, two [**High Contention**]({% link {{ page.version.version }}/ui-insights-page.md %}#high-contention) insights will be listed for [Example 1](#example-1):
- **Transaction 2**
- **Transaction 3**
-
+
### Waiting statement
@@ -226,21 +226,21 @@ To identify the exact statement in the transaction that experienced high content
On the **Transaction Execution** page, navigate to the **Statement Executions** tab. In the list of statement executions, in the **Insights** column for `SELECT * FROM t where k = _`, there should be the **High Contention** insight. In [Example 1](#example-1), *Transaction 2* had one statement (other than `SHOW database`). In a transaction with multiple statements, use this page to pinpoint the exact statement that experienced high contention.
-
+
### Blocking transaction
To identify the transaction that blocked **Transaction 2** and caused it to experience high contention, navigate back to the **Overview** tab.
-
+
Scroll to the bottom of the **Overview** tab to the **Transaction with ID ... waited on** section, which gives information about the blocking transaction.
-
+
For more information about the blocking transaction, click the **Transaction Fingerprint ID** to open the [**Transaction Details** page]({% link {{ page.version.version }}/ui-transactions-page.md %}#transaction-details-page).
-
+
### Additional practice
diff --git a/src/current/v25.3/ui-cdc-dashboard.md b/src/current/v25.3/ui-cdc-dashboard.md
index e2c0ee5131b..fb1ad0ec771 100644
--- a/src/current/v25.3/ui-cdc-dashboard.md
+++ b/src/current/v25.3/ui-cdc-dashboard.md
@@ -23,7 +23,7 @@ The **Changefeeds** dashboard displays the following time series graphs:
This graph displays the status of all running changefeeds.
-
+
Metric | Description
--------|----
@@ -39,7 +39,7 @@ In the case of a failed changefeed, you may want to use the [`cursor`]({% link {
This graph displays the 99th, 90th, and 50th percentile of commit latency for running changefeeds. This is the difference between an event's MVCC timestamp and the time it was acknowledged as received by the [downstream sink]({% link {{ page.version.version }}/changefeed-sinks.md %}).
-
+
If the sink batches events, then the difference between the oldest event in the batch and acknowledgement is recorded. Latency during backfill is excluded.
@@ -51,7 +51,7 @@ This graph shows the number of bytes emitted by CockroachDB into the changefeed'
In v23.1 and earlier, the **Emitted Bytes** graph was named **Sink Byte Traffic**. If you want to customize charts, including how metrics are named, use the [**Custom Chart** debug page]({% link {{ page.version.version }}/ui-custom-chart-debug-page.md %}).
{{site.data.alerts.end}}
-
+
Metric | Description
--------|----
@@ -64,7 +64,7 @@ This graph displays data relating to the number of messages and flushes at the c
- The number of messages that CockroachDB sent to the sink.
- The number of flushes that the sink performed for changefeeds.
-
+
Metric | Description
--------|----
@@ -79,7 +79,7 @@ This graph displays the most any changefeed's persisted [checkpoint]({% link {{
In v23.1 and earlier, the **Max Checkpoint Latency** graph was named **Max Changefeed Latency**. If you want to customize charts, including how metrics are named, use the [**Custom Chart** debug page]({% link {{ page.version.version }}/ui-custom-chart-debug-page.md %}).
{{site.data.alerts.end}}
-
+
{{site.data.alerts.callout_info}}
The maximum checkpoint latency is distinct from, and slower than, the commit latency for individual change messages. For more information about resolved timestamps, refer to the [Changefeed Messages]({% link {{ page.version.version }}/changefeed-messages.md %}#resolved-messages) page.
@@ -89,7 +89,7 @@ The maximum checkpoint latency is distinct from, and slower than, the commit lat
This graph displays the number of times changefeeds restarted due to [retryable errors]({% link {{ page.version.version }}/monitor-and-debug-changefeeds.md %}#changefeed-retry-errors).
-
+
Metric | Description
--------|----
@@ -99,7 +99,7 @@ Metric | Description
This graph displays the oldest [protected timestamp]({% link {{ page.version.version }}/architecture/storage-layer.md %}#protected-timestamps) of any running changefeed on the cluster.
-
+
Metric | Description
--------|----
@@ -109,7 +109,7 @@ Metric | Description
This graph displays the number of ranges being backfilled that are yet to enter the changefeed pipeline. An [initial scan]({% link {{ page.version.version }}/create-changefeed.md %}#initial-scan) or [schema change]({% link {{ page.version.version }}/online-schema-changes.md %}) can cause a backfill.
-
+
Metric | Description
--------|----
@@ -119,7 +119,7 @@ Metric | Description
This graph displays the rate of schema registration requests made by CockroachDB nodes to a configured schema registry endpoint. For example, a [Kafka sink]({% link {{ page.version.version }}/changefeed-sinks.md %}#kafka) pointing to a [Confluent Schema Registry]({% link {{ page.version.version }}/stream-a-changefeed-to-a-confluent-cloud-kafka-cluster.md %}).
-
+
Metric | Description
--------|----
@@ -129,7 +129,7 @@ Metric | Description
This graph displays the total number of ranges with an active [rangefeed]({% link {{ page.version.version }}/create-and-configure-changefeeds.md %}#enable-rangefeeds) that is performing a catchup scan.
-
+
Metric | Description
--------|----
@@ -139,7 +139,7 @@ Metric | Description
This graph displays the duration of catchup scans that changefeeds are performing.
-
+
Metric | Description
--------|----
diff --git a/src/current/v25.3/ui-cluster-overview-page.md b/src/current/v25.3/ui-cluster-overview-page.md
index 57c4c816de9..02a68ce5816 100644
--- a/src/current/v25.3/ui-cluster-overview-page.md
+++ b/src/current/v25.3/ui-cluster-overview-page.md
@@ -18,7 +18,7 @@ Enable the [Node Map](#node-map) view for a visual representation of your cluste
Use the **Cluster Overview** panel to quickly assess the capacity and health of your cluster.
-
+
Metric | Description
--------|----
@@ -110,7 +110,7 @@ The **Node Map** visualizes the geographical configuration of your cluster. It r
For guidance on enabling and configuring the node map, see [Enable the Node Map]({% link {{ page.version.version }}/enable-node-map.md %}).
-
+
The Node Map uses the longitude and latitude of each locality to position the components on the map. The map is populated with [**locality components**](#locality-component) and [**node components**](#node-component).
@@ -122,7 +122,7 @@ The map shows the components for the highest-level locality tier (e.g., region).
For details on how **Capacity Usage** is calculated, see [Capacity metrics](#capacity-metrics).
-
+
{{site.data.alerts.callout_info}}
On multi-core systems, the displayed CPU usage can be greater than 100%. Full utilization of 1 core is considered 100% CPU usage. If you have _n_ cores, then CPU usage can range from 0% (indicating an idle system) to (_n_ * 100)% (indicating full utilization).
@@ -136,7 +136,7 @@ Node components are accessed by clicking on the **Node Count** of the lowest-lev
For details on how **Capacity Usage** is calculated, see [Capacity metrics](#capacity-metrics).
-
+
{{site.data.alerts.callout_info}}
On multi-core systems, the displayed CPU usage can be greater than 100%. Full utilization of 1 core is considered 100% CPU usage. If you have _n_ cores, then CPU usage can range from 0% (indicating an idle system) to (_n_ * 100)% (indicating full utilization).
diff --git a/src/current/v25.3/ui-custom-chart-debug-page.md b/src/current/v25.3/ui-custom-chart-debug-page.md
index 0d32554b01b..c8cff4702af 100644
--- a/src/current/v25.3/ui-custom-chart-debug-page.md
+++ b/src/current/v25.3/ui-custom-chart-debug-page.md
@@ -13,7 +13,7 @@ To view the Custom Chart page, [access the DB Console]({% link {{ page.version.v
## Use the Custom Chart page
-
+
On the **Custom Chart** page, you can set the time span for all charts, add new custom charts, and customize each chart:
@@ -28,7 +28,7 @@ On the **Custom Chart** page, you can set the time span for all charts, add new
### Query user and system CPU usage
-
+
To compare system vs. userspace CPU usage, select the following values under **Metric Name**:
diff --git a/src/current/v25.3/ui-key-visualizer.md b/src/current/v25.3/ui-key-visualizer.md
index 2f35ae23eeb..1c65d3cb95a 100644
--- a/src/current/v25.3/ui-key-visualizer.md
+++ b/src/current/v25.3/ui-key-visualizer.md
@@ -39,7 +39,7 @@ Once you have enabled the Key Visualizer, CockroachDB will begin monitoring keys
When you navigate to the **Key Visualizer** page in the DB Console, after a brief loading time, CockroachDB presents the collected data in a visualization designed to help you see data traffic trends at a glance.
-
+
The Key Visualizer presents the following information:
@@ -82,7 +82,7 @@ The Key Visualizer was designed to make potentially problematic ranges stand out
The following image shows the Key Visualizer highlighting a series of [hotspots]({% link {{ page.version.version }}/understand-hotspots.md %}): ranges with much higher-than-average write rates compared to the rest of the cluster.
-
+
**Remediation:** If you've identified a potentially problematic range as a hotspot, follow the recommended best practices to [reduce hotspots]({% link {{ page.version.version }}/understand-hotspots.md %}#reduce-hotspots). In the case of the screenshot above, the increased write cadence is due to a series of [range splits]({% link {{ page.version.version }}/architecture/distribution-layer.md %}#range-splits), where a range experiencing a large volume of incoming writes is splitting its keyspace to accommodate the growing range. This is often part of normal operation, but can indicate a data modeling issue if the range split is unexpected or is causing cluster performance issues.
@@ -90,7 +90,7 @@ The following image shows the Key Visualizer highlighting a series of [hotspots]
The following image shows the Key Visualizer highlighting a [full-table scan]({% link {{ page.version.version }}/make-queries-fast.md %}), where the lack of an appropriate index forces the query planner to scan the entire table to find the records requested by a query. This is most clearly visible as the cascading series of bright red ranges that proceed diagonally from one another, such as the series of three shown at the mouse cursor. This cascade represents the sequential scan of contiguous ranges in the keyspace as the query planner attempts to locate the requested data without an index.
-
+
**Remediation:** If you've identified a full table scan, follow the guidance to [optimize statement performance]({% link {{ page.version.version }}/make-queries-fast.md %}). You can also [analyze your queries with `EXPLAIN`]({% link {{ page.version.version }}/sql-tuning-with-explain.md %}) to investigate if an index was used in the execution of the query.
diff --git a/src/current/v25.3/ui-network-latency-page.md b/src/current/v25.3/ui-network-latency-page.md
index eccc8b1f443..3d6b60662bd 100644
--- a/src/current/v25.3/ui-network-latency-page.md
+++ b/src/current/v25.3/ui-network-latency-page.md
@@ -20,7 +20,7 @@ Select **Collapse Nodes** to display the mean latencies of each locality, depend
Each cell in the matrix displays the round-trip latency in milliseconds between two nodes in your cluster. Round-trip latency includes the return time of a packet. Latencies are color-coded by their standard deviation from the mean latency on the network: green for lower values, and blue for higher. Nodes with the lowest latency are displayed in darker green, and nodes with the highest latency are displayed in darker blue.
-
+
Rows represent origin nodes, and columns represent destination nodes. Hover over a cell to display more details:
@@ -34,7 +34,7 @@ On a [typical multi-region cluster]({% link {{ page.version.version }}/demo-low-
For instance, the cluster shown above has nodes in `us-west1`, `us-east1`, and `europe-west2`. Latencies are highest between nodes in `us-west1` and `europe-west2`, which span the greatest distance. This is especially clear when sorting by region or availability zone and collapsing nodes:
-
+
### No connections
@@ -45,7 +45,7 @@ Nodes that have completely lost connectivity are color-coded depending on connec
This information can help you diagnose a network partition in your cluster.
-
+
Hover over a cell to display more details:
diff --git a/src/current/v25.3/ui-physical-cluster-replication-dashboard.md b/src/current/v25.3/ui-physical-cluster-replication-dashboard.md
index 409d791f304..3ed3cb9193f 100644
--- a/src/current/v25.3/ui-physical-cluster-replication-dashboard.md
+++ b/src/current/v25.3/ui-physical-cluster-replication-dashboard.md
@@ -21,7 +21,7 @@ The **Physical Cluster Replication** dashboard displays the following time-serie
## Logical bytes
-
+
The **Logical Bytes** graph displays the throughput of replicated bytes: the rate at which logical bytes (the sum of keys and values) are ingested by all replication jobs.
@@ -36,7 +36,7 @@ When you [start a replication stream]({% link {{ page.version.version }}/set-up-
## Replication lag
-
+
The **Replication Lag** graph displays the [replication lag]({% link {{ page.version.version }}/physical-cluster-replication-technical-overview.md %}) between the primary and standby cluster. This is the time between the most up-to-date replicated time and the actual time.
diff --git a/src/current/v25.3/ui-queues-dashboard.md b/src/current/v25.3/ui-queues-dashboard.md
index 0cb96401158..1a300547eb9 100644
--- a/src/current/v25.3/ui-queues-dashboard.md
+++ b/src/current/v25.3/ui-queues-dashboard.md
@@ -171,7 +171,7 @@ Pending Actions | The number of pending replicas in the time series maintenance
## MVCC GC Queue
-
+
The **MVCC GC Queue** graph displays various details about the health and performance of the [garbage collection]({% link {{ page.version.version }}/architecture/storage-layer.md %}#garbage-collection) queue.
@@ -184,7 +184,7 @@ Pending Actions | The number of pending replicas in the [garbage collection]({%
## Protected Timestamp Records
-
+
The **Protected Timestamp Records** graph displays the number of [protected timestamp]({% link {{ page.version.version }}/architecture/storage-layer.md %}#protected-timestamps) records (used by backups, changefeeds, etc. to prevent MVCC GC) per node, as tracked by the `spanconfig.kvsubscriber.protected_record_count` metric.
diff --git a/src/current/v25.3/ui-replication-dashboard.md b/src/current/v25.3/ui-replication-dashboard.md
index 353af4963e7..5e41469377b 100644
--- a/src/current/v25.3/ui-replication-dashboard.md
+++ b/src/current/v25.3/ui-replication-dashboard.md
@@ -31,7 +31,7 @@ The **Replication** dashboard displays the following time series graphs:
## Ranges
-
+
The **Ranges** graph shows you various details about the status of ranges.
@@ -52,7 +52,7 @@ Under-replicated | The number of under-replicated ranges. Non-voting replicas ar
## Logical Bytes per Store
-
+
Metric | Description
--------|--------
@@ -64,7 +64,7 @@ Metric | Description
## Replicas Per Store
-
+
- In the node view, the graph shows the number of range replicas on the store.
@@ -74,7 +74,7 @@ You can [Replication Controls]({% link {{ page.version.version }}/configure-repl
## Replica Quiescence
-
+
- In the node view, the graph shows the number of replicas on the node.
@@ -104,7 +104,7 @@ Load-based Range Rebalances | `rebalancing.range.rebalances` | Number of range r
## Snapshots
-
+
Usually the nodes in a [Raft group]({% link {{ page.version.version }}/architecture/replication-layer.md %}#raft) stay synchronized by following along with the log message by message. However, if a node is far enough behind the log (e.g., if it was offline or is a new node getting up to speed), rather than send all the individual messages that changed the range, the cluster can send it a snapshot of the range and it can start following along from there. Commonly this is done preemptively, when the cluster can predict that a node will need to catch up, but occasionally the Raft protocol itself will request the snapshot.
@@ -118,7 +118,7 @@ Reserved | The number of slots reserved per second for incoming snapshots that w
## Snapshot Data Received
-
+
The **Snapshot Data Received** graph shows the rate per second of data received in bytes by each node via [Raft snapshot transfers]({% link {{ page.version.version }}/architecture/replication-layer.md %}#snapshots). Data is split into recovery and rebalancing snapshot data received: recovery includes all upreplication due to decommissioning or node failure, and rebalancing includes all other snapshot data received.
@@ -131,7 +131,7 @@ Metric | Description
## Receiver Snapshots Queued
-
+
The **Receiver Snapshots Queued** graph shows the number of [Raft snapshot transfers]({% link {{ page.version.version }}/architecture/replication-layer.md %}#snapshots) queued to be applied on a receiving node, which can only accept one snapshot at a time per store.
@@ -143,7 +143,7 @@ Metric | Description
## Circuit Breaker Tripped Replicas
-
+
When individual ranges become temporarily unavailable, requests to those ranges are refused by a [per-replica circuit breaker]({% link {{ page.version.version }}/architecture/replication-layer.md %}#per-replica-circuit-breaker-overview) instead of hanging indefinitely.
@@ -159,7 +159,7 @@ Metric | Description
## Circuit Breaker Tripped Events
-
+
When individual ranges become temporarily unavailable, requests to those ranges are refused by a [per-replica circuit breaker]({% link {{ page.version.version }}/architecture/replication-layer.md %}#per-replica-circuit-breaker-overview) instead of hanging indefinitely. While a range's per-replica circuit breaker remains tripped, each incoming request to that range triggers a `ReplicaUnavailableError` event until the range becomes available again.
@@ -173,7 +173,7 @@ Metric | Description
## Replicate Queue Actions: Successes
-
+
The **Replicate Queue Actions: Successes** graph shows the rate of various successful replication queue actions per second.
@@ -194,7 +194,7 @@ Decommissioning Replicas Removed / Sec | The number of successful decommissionin
## Replicate Queue Actions: Failures
-
+
The **Replicate Queue Actions: Failures** graph shows the rate of various failed replication queue actions per second.
@@ -215,7 +215,7 @@ Decommissioning Replicas Removed Errors / Sec | The number of failed decommissio
## Decommissioning Errors
-
+
The **Decommissioning Errors** graph shows the rate per second of decommissioning replica replacement failures experienced by the replication queue, by node.
diff --git a/src/current/v25.3/ui-runtime-dashboard.md b/src/current/v25.3/ui-runtime-dashboard.md
index 7e29d3f218b..df55ad38436 100644
--- a/src/current/v25.3/ui-runtime-dashboard.md
+++ b/src/current/v25.3/ui-runtime-dashboard.md
@@ -17,7 +17,7 @@ The **Runtime** dashboard displays the following time series graphs:
## Live Node Count
-
+
In the node view as well as the cluster view, the graph shows the number of live nodes in the cluster.
@@ -25,7 +25,7 @@ A dip in the graph indicates decommissioned nodes, dead nodes, or nodes that are
## Memory Usage
-
+
- In the node view, the graph shows the memory in use for the selected node.
@@ -49,7 +49,7 @@ CGo Total | Total memory managed by the C layer.
## CPU Time
-
+
- In the node view, the graph shows the [CPU time](https://wikipedia.org/wiki/CPU_time) used by CockroachDB user and system-level operations for the selected node.
- In the cluster view, the graph shows the [CPU time](https://wikipedia.org/wiki/CPU_time) used by CockroachDB user and system-level operations across all nodes in the cluster.
@@ -63,7 +63,7 @@ Sys CPU Time | Total CPU seconds per second used for CockroachDB system-level op
## Clock Offset
-
+
- In the node view, the graph shows the mean clock offset of the node against the rest of the cluster.
- In the cluster view, the graph shows the mean clock offset of each node against the rest of the cluster.
diff --git a/src/current/v25.3/ui-schedules-page.md b/src/current/v25.3/ui-schedules-page.md
index a04014d08aa..eeb4496d71e 100644
--- a/src/current/v25.3/ui-schedules-page.md
+++ b/src/current/v25.3/ui-schedules-page.md
@@ -27,7 +27,7 @@ Use the **Schedules** list to see your active and paused schedules.
The following screenshot shows a list of backups and automated statistics compaction schedules:
-
+
Column | Description
---------------------+--------------
@@ -44,7 +44,7 @@ Creation Time (UTC) | The time at which the user originally created the sc
Click on a schedule ID to view the full SQL statement that the schedule runs. For example, the following screenshot shows the resulting [`BACKUP`]({% link {{ page.version.version }}/backup.md %}) statement for a full cluster backup recurring every day:
-
+
You may also view a `protected_timestamp_record` on this page. This indicates that the schedule is actively managing its own [protected timestamp]({% link {{ page.version.version }}/architecture/storage-layer.md %}#protected-timestamps) records independently of [GC TTL]({% link {{ page.version.version }}/configure-replication-zones.md %}#gc-ttlseconds). See [Protected timestamps and scheduled backups]({% link {{ page.version.version }}/create-schedule-for-backup.md %}#protected-timestamps-and-scheduled-backups) for more detail.
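As a rough sketch, a daily full cluster backup schedule of this kind might be created with a statement along the following lines; the schedule label and storage URI are placeholders:

~~~ sql
-- A full cluster backup that recurs daily. While each backup runs, the
-- schedule manages its own protected timestamp records.
CREATE SCHEDULE daily_cluster_backup
  FOR BACKUP INTO 's3://backup-bucket/cluster?AUTH=implicit'
  RECURRING '@daily'
  FULL BACKUP ALWAYS;
~~~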
diff --git a/src/current/v25.3/ui-ttl-dashboard.md b/src/current/v25.3/ui-ttl-dashboard.md
index 4a9e5e95f7d..f32d1e550db 100644
--- a/src/current/v25.3/ui-ttl-dashboard.md
+++ b/src/current/v25.3/ui-ttl-dashboard.md
@@ -19,7 +19,7 @@ The **TTL** dashboard displays the following time series graphs:
You can monitor the **Processing Rate** graph to see how many rows per second are being processed by [TTL jobs]({% link {{ page.version.version }}/row-level-ttl.md %}#view-running-ttl-jobs).
-
+
| Metric | Description |
|---------------+---------------------------------------------|
@@ -30,7 +30,7 @@ You can monitor the **Processing Rate** graph to see how many rows per second ar
Monitor the **Estimated Rows** graph to see approximately how many rows are in the TTL table.
-
+
| Metric | Description |
|------------------------------------+-----------------------------------------------------------------|
@@ -41,13 +41,13 @@ Monitor the **Estimated Rows** graph to see approximately how many rows are on t
Monitor the **Job Latency** graph to see the latency of scanning and deleting within your cluster's [TTL jobs]({% link {{ page.version.version }}/row-level-ttl.md %}#view-running-ttl-jobs).
-
+
## Ranges in Progress
Monitor the **Ranges in Progress** graph to see the number of ranges currently being processed by [TTL jobs]({% link {{ page.version.version }}/row-level-ttl.md %}#view-running-ttl-jobs).
-
+
| Metric | Description |
|----------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------|
diff --git a/src/current/v25.3/understand-hotspots.md b/src/current/v25.3/understand-hotspots.md
index c664dbdee06..633a6387b2b 100644
--- a/src/current/v25.3/understand-hotspots.md
+++ b/src/current/v25.3/understand-hotspots.md
@@ -35,7 +35,7 @@ All hotspot types described on this page will create hot nodes, as long as the c
The following image is a graph of [CPU Percent]({% link {{ page.version.version }}/ui-hardware-dashboard.md %}#cpu-percent) utilization per node. Most of the nodes hover around 25%, while one hot node is around 95%. Because the hot node keeps changing, the hotspot is moving from one node to another as the [ranges]({% link {{ page.version.version }}/architecture/overview.md %}#range) receiving the writes fill up and split. For more information, refer to [hot range](#hot-range) and [moving hotspot](#moving-hotspot).
-
+
### Hot range
@@ -106,7 +106,7 @@ An _index hotspot_ is a hotspot on an [index]({% link {{ page.version.version }}
Consider a table `users` that contains a [primary key]({% link {{ page.version.version }}/primary-key.md %}) `user_id`, which is an incrementing integer value. Each new key will be the current maximum key + 1. In this way, all writes appear at the index tail. The following image visualizes writes to the `users` table using an incrementing `INT` primary key. Note how all writes are focused at the tail of the index, represented by the red section in Range 4.
-
+
Even if performance degradation in Range 4 is mitigated, the system remains constrained by the number of writes a single range can handle. As a result, CockroachDB could be limited to the performance of a single node, which goes against the purpose of a distributed database.
@@ -116,7 +116,7 @@ In the ideal operation of a distributed SQL database, inserts into an index shou
Consider a table `users` that contains a primary key `user_uuid` of type [`UUID`]({% link {{ page.version.version }}/uuid.md %}). Because `UUID`s are pseudo-random, new rows are inserted into the keyspace at random locations. The following image visualizes writes to the `users` table using a `UUID` primary key. Red lines indicate an insert into the keyspace. Note how the red lines are distributed evenly.
-
+
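A minimal sketch of the two key designs discussed above; the table and column names are illustrative:

~~~ sql
-- Hotspot-prone: a monotonically increasing integer key places every new
-- row at the tail of the primary index.
CREATE SEQUENCE user_id_seq;
CREATE TABLE users_sequential (
    user_id INT PRIMARY KEY DEFAULT nextval('user_id_seq'),
    name STRING
);

-- Better distributed: pseudo-random UUIDs spread inserts across the keyspace.
CREATE TABLE users_random (
    user_uuid UUID PRIMARY KEY DEFAULT gen_random_uuid(),
    name STRING
);
~~~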
Inserts are not the only way that index hotspots can occur. Consider the same table `users` that now has a [secondary index]({% link {{ page.version.version }}/schema-design-indexes.md %}) on a `TIMESTAMP` column:
@@ -145,7 +145,7 @@ The resolution of the index hotspot often depends on your requirements for the d
If inserting in sequential order is important, the index itself can be [hash-sharded]({% link {{ page.version.version }}/hash-sharded-indexes.md %}), which means that it is still stored in order, albeit split across some number of shards. Consider a `users` table with a primary key `id INT` that is hash-sharded into 4 shards using a hashing function of modulo 4. The following image illustrates this example:
-
+
Now the writes are distributed into the tails of the shards, rather than the tail of the whole index. This benefits write performance but makes reads more challenging. If you need to read a subset of the data, you will have to scan each shard of the index.
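A sketch of the hash-sharded design described above, assuming the `USING HASH` syntax with a `bucket_count` storage parameter:

~~~ sql
-- Store the ordered primary index in 4 shards so that inserts spread
-- across 4 range tails instead of a single one.
CREATE TABLE users (
    id INT NOT NULL,
    name STRING,
    PRIMARY KEY (id) USING HASH WITH (bucket_count = 4)
);
~~~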
@@ -183,7 +183,7 @@ A _queueing hotspot_ is a type of index hotspot that occurs when a workload trea
Queues, such as logs, generally require data to be ordered by write, which necessitates indexing in a way that is likely to create a hotspot. An outbox where data is deleted as it is read has an additional problem: it tends to accumulate an ordered set of [garbage data]({% link {{ page.version.version }}/operational-faqs.md %}#why-is-my-disk-usage-not-decreasing-after-deleting-data) behind the live data. Since the system cannot determine whether any live rows exist within the garbage data, what appears to be a small table scan to the user can actually result in an unexpectedly intensive scan on the garbage data.
-
+
To mitigate this issue, use [Change Data Capture (CDC)]({% link {{ page.version.version }}/cdc-queries.md %}) so that consumers subscribe to updates rather than reading from outbox tables. If using CDC is not possible, sharding the index that the outbox uses for ordering can reduce the likelihood of a hotspot within the cluster.
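As a hedged sketch of the CDC alternative, assuming an `outbox` table and a placeholder Kafka sink URI:

~~~ sql
-- Stream new outbox events to downstream consumers with a changefeed,
-- rather than having consumers poll and delete rows from the table.
CREATE CHANGEFEED INTO 'kafka://broker.example.com:9092'
  AS SELECT * FROM outbox;
~~~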
@@ -220,13 +220,13 @@ UPDATE User SET follower_count = follower_count+1 WHERE id=2;
This simple design works well until it encounters an unexpected surge in activity. For example, consider user 471, who suddenly gains millions of followers within an hour. This sudden increase in followers causes a significant amount of write traffic to the range responsible for this user, which the system may not be able to handle efficiently. The following image visualizes a hot row in the keyspace. Note how writes are focused on a single point, which cannot be split.
-
+
Without changing the default behavior of the system, the load will not be distributed because it needs to be served by a single range. This behavior is not just temporary; certain users may consistently experience a high volume of activity compared to the average user. This can result in a system with multiple hotspots, each of which can potentially overload the system at any moment.
The following image visualizes a keyspace with multiple hot rows. In a large enough cluster, each of these rows can burden the range they live in, leading to multiple burdened nodes.
-
+
### Hot sequence
@@ -248,7 +248,7 @@ Because the primary key index is [hash-sharded]({% link {{ page.version.version
The following image visualizes writes in the `products` keyspace using hash-sharded rows. With five shards, the writes are better distributed into the keyspace, but the `id` sequence row becomes the limiting factor.
-
+
Because sequences avoid user expressions, optimizations can be made to improve their performance, but the write volume on the sequence is still the sum of all its accesses.
@@ -278,7 +278,7 @@ country_id UUID REFERENCES countries(id)
SELECT * FROM posts p JOIN countries c ON p.country_id=c.id;
~~~
-
+
Reads in the `posts` table may be evenly distributed, but joining the `countries` table becomes a bottleneck because it exists in so few ranges. Splitting the `countries` table's ranges can relieve pressure, but only up to a point, because the indivisible rows still experience high throughput. [Global tables]({% link {{ page.version.version }}/global-tables.md %}) and [follower reads]({% link {{ page.version.version }}/follower-reads.md %}) can help scaling in this case, especially when write throughput is low.
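For example, in a database already configured with multiple regions, the reference table could be marked as a global table, trading slower writes for fast reads from every region; a minimal sketch:

~~~ sql
-- Serve low-latency reads of the reference table from every region.
-- Assumes the database has been configured as multi-region.
ALTER TABLE countries SET LOCALITY GLOBAL;
~~~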
@@ -303,7 +303,7 @@ By doing this, you have limited the traffic from the highest throughput table to
The following image visualizes the regional breakout of data in the `orders` table. Because of the domiciling policy, reads and writes to the `orders` table will be focused on the `us-east-1` nodes.
-
+
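One way to express such a domiciling policy, assuming a multi-region database that includes a `us-east-1` region, is to home the table there; a rough sketch:

~~~ sql
-- Home the voting replicas and leaseholders for the orders table in us-east-1.
ALTER TABLE orders SET LOCALITY REGIONAL BY TABLE IN "us-east-1";
~~~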
### Temporal hotspot
diff --git a/src/current/v25.3/wal-failover.md b/src/current/v25.3/wal-failover.md
index d695517a3e5..32e2688cc3c 100644
--- a/src/current/v25.3/wal-failover.md
+++ b/src/current/v25.3/wal-failover.md
@@ -29,7 +29,7 @@ WAL failover uses a secondary disk to fail over WAL writes to when transient dis
The following diagram shows how WAL failover works at a high level. For more information about the WAL, memtables, and SSTables, refer to the [Architecture » Storage Layer documentation]({% link {{ page.version.version }}/architecture/storage-layer.md %}).
-
+
## Create and configure a cluster to be ready for WAL failover
@@ -280,7 +280,7 @@ In [DB Console's **Advanced Debug** page]({% link {{ page.version.version }}/ui-
Set the source of these metrics to be the node where you are running the disk stall/unstall script.
-
+
Notice that a switchover follows each stall. The node with the stalled disk continues to perform normal operations during and after WAL failover, because the stalls are transient and shorter than the current value of [`COCKROACH_ENGINE_MAX_SYNC_DURATION_DEFAULT`](#important-environment-variables).
@@ -377,7 +377,7 @@ In a [multi-store](#multi-store-config) cluster, if a disk for a store has a tra
The following diagram shows the behavior of WAL writes during a disk stall with and without WAL failover enabled.
-
+
## FAQs
diff --git a/src/current/v25.3/window-functions.md b/src/current/v25.3/window-functions.md
index b56b3c4e4e5..a042d751204 100644
--- a/src/current/v25.3/window-functions.md
+++ b/src/current/v25.3/window-functions.md
@@ -94,7 +94,7 @@ Its operation can be described as follows (numbered steps listed here correspond
1. The window function `SUM(revenue) OVER ()` operates on a window frame containing all rows of the query output.
1. The window function `SUM(revenue) OVER (PARTITION BY city)` operates on several window frames in turn; each frame contains the `revenue` columns for a different city [partition]({% link {{ page.version.version }}/partitioning.md %}) (Amsterdam, Boston, L.A., etc.).
-
+
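A compact query sketch combining the two windows described in the steps above; the `rides` table name is assumed here for illustration:

~~~ sql
-- One window frame over all rows, and one frame per city partition.
SELECT city,
       revenue,
       SUM(revenue) OVER ()                  AS total_revenue,
       SUM(revenue) OVER (PARTITION BY city) AS city_revenue
FROM rides;
~~~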
### Caveats