Merged
2 changes: 1 addition & 1 deletion README.md
@@ -57,7 +57,7 @@ list of commands as built.
| RUM | ✅ | `rum apps`, `rum sessions`, `rum metrics`, `rum retention-filters`, `rum playlists`, `rum heatmaps` | Apps, sessions, metrics, retention filters, replay playlists, heatmaps |
| APM Services | ✅ | `apm services`, `apm entities`, `apm dependencies`, `apm flow-map` | Services stats, operations, resources; entity queries; dependencies; flow visualization |
| Traces | ✅ | `traces search`, `traces aggregate`, `traces metrics` | Span search/aggregation and span-based metric definitions |
-| Profiling | | `profiling aggregate`, `profiling analytics`, `profiling timeline`, … | Continuous Profiler queries (requires API + App keys) |
+| Profiling | | `profiling` | Not yet supported in pup. Use the Datadog MCP server: https://docs.datadoghq.com/bits_ai/mcp_server. Enable with: https://mcp.datadoghq.com/api/unstable/mcp-server/mcp?toolsets=core,profiling |
| Database Monitoring | ✅ | `dbm samples search` | DBM query sample search |
| Session Replay | ❌ | - | Not yet implemented |

6 changes: 3 additions & 3 deletions docs/COMMANDS.md
@@ -61,7 +61,7 @@ pup <domain> <subgroup> <action> [options] # Nested commands
| containers | list, images (list) | src/commands/containers.rs | ✅ |
| costs | datadog (projected, attribution, by-org, aws-config, azure-config, gcp-config), ccm (custom-costs, tag-descriptions, tag-metadata, tags, tag-keys, budgets, commitments) | src/commands/cost.rs, src/commands/cost_ccm.rs | ✅ |
| product-analytics | events send | src/commands/product_analytics.rs | ✅ |
-| profiling | aggregate, analysis, analytics, breakdown, callgraph, download, fields, info, list, save-favorite, timeline | src/commands/profiling.rs | |
+| profiling | none | n/a | |
| datasets | list, get, create, update, delete | src/commands/datasets.rs | ✅ |
| data-deletion | requests (list, create, cancel) | src/commands/data_deletion.rs | ✅ |
| data-governance | scanner-rules (list) | src/commands/data_governance.rs | ✅ |
@@ -88,7 +88,7 @@ pup <domain> <subgroup> <action> [options] # Nested commands

**Auth note:** All workflow commands require `DD_API_KEY` + `DD_APP_KEY`. OAuth2 bearer tokens are not supported for workflow operations.
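The key-based auth requirement above can be sketched as a minimal shell setup; the placeholder values are illustrative, not real credentials, and the example command is taken from the host-inventory section of this document:

```shell
# Both keys must be present in the environment; OAuth2 bearer
# tokens are rejected by workflow operations, so there is no
# token-based alternative here.
export DD_API_KEY="your-api-key"
export DD_APP_KEY="your-app-key"

# Key-authenticated commands can then run, for example:
# pup infrastructure hosts list
```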

-**Auth note (profiling):** All `pup profiling` commands require `DD_API_KEY` + `DD_APP_KEY`. No OAuth2 scope is declared for Continuous Profiler endpoints, so bearer tokens are not supported.
+**Profiling note:** `pup profiling` has no subcommands yet. Use the Datadog MCP server instead: https://docs.datadoghq.com/bits_ai/mcp_server. Enable profiling in the MCP toolset with: https://mcp.datadoghq.com/api/unstable/mcp-server/mcp?toolsets=core,profiling
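As a hedged sketch of the MCP route: many MCP clients accept a JSON config that points at a remote server URL. The `mcpServers` schema and the `mcp-config.json` file name below are hypothetical and vary by client; only the URL, with the profiling toolset enabled via `?toolsets=core,profiling`, comes from the Datadog docs linked above.

```shell
# Hypothetical client config; check your MCP client's docs for the
# exact schema. The URL is the Datadog MCP endpoint with the
# profiling toolset enabled.
cat > mcp-config.json <<'EOF'
{
  "mcpServers": {
    "datadog": {
      "url": "https://mcp.datadoghq.com/api/unstable/mcp-server/mcp?toolsets=core,profiling"
    }
  }
}
EOF
```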

## Common Patterns

@@ -157,7 +157,7 @@ pup infrastructure hosts list
- **infrastructure** - Host inventory (hosts list, hosts get)
- **network** - Network monitoring (flows list, devices list/get/interfaces/tags, interfaces list/update)
- **tags** - Host tag management (list, get, add, update, delete)
-- **profiling** - Continuous Profiler data (aggregate, analysis, analytics, breakdown, callgraph, download, fields, info, list, save-favorite, timeline)
+- **profiling** - Placeholder that points users to the Datadog MCP server for profiler data

### Security & Compliance
- **security** - Security monitoring (rules, signals, findings, content-packs, risk-scores)
317 changes: 0 additions & 317 deletions src/commands/cost_ccm.rs
@@ -839,321 +839,4 @@ mod tests {
let _ = std::fs::remove_file(&tmp);
cleanup_env();
}

#[tokio::test]
async fn test_profiling_analytics_rejects_empty_group_by() {
let _lock = lock_env().await;
let s = mockito::Server::new_async().await;
let cfg = test_config(&s.url());
// Mock not required: we should fail before hitting the API.
let result = crate::commands::profiling::analytics(
&cfg,
"*".into(),
"15m".into(),
"now".into(),
Some(" , ".into()),
None,
100,
)
.await;
assert!(
result.is_err(),
"expected error for empty --group-by tokens"
);
cleanup_env();
}

#[tokio::test]
async fn test_profiling_fields_ok() {
let _lock = lock_env().await;
let mut s = mockito::Server::new_async().await;
let cfg = test_config(&s.url());
let mock = s
.mock("POST", "/api/unstable/profiles/interactive-analytics/field")
.with_status(200)
.with_header("content-type", "application/json")
.with_body(r#"{"data":[]}"#)
.create_async()
.await;
let result = crate::commands::profiling::fields(
&cfg,
"service".into(),
"*".into(),
"15m".into(),
"now".into(),
100,
)
.await;
assert!(result.is_ok(), "fields failed: {:?}", result.err());
mock.assert_async().await;
cleanup_env();
}

#[tokio::test]
async fn test_profiling_fields_error() {
let _lock = lock_env().await;
let mut s = mockito::Server::new_async().await;
let cfg = test_config(&s.url());
s.mock("POST", mockito::Matcher::Any)
.with_status(500)
.create_async()
.await;
let result = crate::commands::profiling::fields(
&cfg,
"service".into(),
"*".into(),
"15m".into(),
"now".into(),
100,
)
.await;
assert!(result.is_err(), "expected error on 500");
cleanup_env();
}

#[tokio::test]
async fn test_profiling_aggregate_ok() {
let _lock = lock_env().await;
let mut s = mockito::Server::new_async().await;
let cfg = test_config(&s.url());
let mock = s
.mock("POST", "/profiling/api/v1/aggregate")
.with_status(200)
.with_header("content-type", "application/json")
.with_body(r#"{"flameGraph":[]}"#)
.create_async()
.await;
let result = crate::commands::profiling::aggregate(
&cfg,
"service:web".into(),
"cpu-time".into(),
"1h".into(),
"now".into(),
100,
"sum".into(),
)
.await;
assert!(result.is_ok(), "aggregate failed: {:?}", result.err());
mock.assert_async().await;
cleanup_env();
}

#[tokio::test]
async fn test_profiling_aggregate_invalid_time() {
let _lock = lock_env().await;
let s = mockito::Server::new_async().await;
let cfg = test_config(&s.url());
let result = crate::commands::profiling::aggregate(
&cfg,
"*".into(),
"cpu-time".into(),
"notatime".into(),
"now".into(),
100,
"sum".into(),
)
.await;
assert!(result.is_err(), "expected parse error on invalid --from");
cleanup_env();
}

#[tokio::test]
async fn test_profiling_breakdown_ok_no_filter() {
let _lock = lock_env().await;
let mut s = mockito::Server::new_async().await;
let cfg = test_config(&s.url());
let mock = s
.mock("POST", "/profiling/api/v1/profiles/pid/breakdown")
.with_status(200)
.with_header("content-type", "application/json")
.with_body(r#"{"tree":{}}"#)
.create_async()
.await;
let result = crate::commands::profiling::breakdown(&cfg, "pid", None, None, None).await;
assert!(result.is_ok(), "breakdown failed: {:?}", result.err());
mock.assert_async().await;
cleanup_env();
}

#[tokio::test]
async fn test_profiling_breakdown_rejects_partial_filter() {
let _lock = lock_env().await;
let s = mockito::Server::new_async().await;
let cfg = test_config(&s.url());
let result = crate::commands::profiling::breakdown(
&cfg,
"pid",
Some("service:web".into()),
Some("1h".into()),
None,
)
.await;
assert!(result.is_err(), "expected error for partial filter triple");
cleanup_env();
}

#[tokio::test]
async fn test_profiling_timeline_ok() {
let _lock = lock_env().await;
let mut s = mockito::Server::new_async().await;
let cfg = test_config(&s.url());
let mock = s
.mock("POST", "/profiling/api/v1/profiles/pid/timeline")
.with_status(200)
.with_header("content-type", "application/json")
.with_body(r#"{"layers":[]}"#)
.create_async()
.await;
let result = crate::commands::profiling::timeline(&cfg, "pid", "eid").await;
assert!(result.is_ok(), "timeline failed: {:?}", result.err());
mock.assert_async().await;
cleanup_env();
}

#[tokio::test]
async fn test_profiling_timeline_error() {
let _lock = lock_env().await;
let mut s = mockito::Server::new_async().await;
let cfg = test_config(&s.url());
s.mock("POST", mockito::Matcher::Any)
.with_status(404)
.create_async()
.await;
let result = crate::commands::profiling::timeline(&cfg, "missing", "eid").await;
assert!(result.is_err(), "expected error on 404");
cleanup_env();
}

#[tokio::test]
async fn test_profiling_callgraph_ok() {
let _lock = lock_env().await;
let mut s = mockito::Server::new_async().await;
let cfg = test_config(&s.url());
let mock = s
.mock("POST", "/api/unstable/profiles/callgraph")
.with_status(200)
.with_header("content-type", "application/json")
.with_body(r#"{"nodes":[]}"#)
.create_async()
.await;
let result = crate::commands::profiling::callgraph(
&cfg,
"service:web".into(),
"cpu-time".into(),
"15m".into(),
"now".into(),
100,
)
.await;
assert!(result.is_ok(), "callgraph failed: {:?}", result.err());
mock.assert_async().await;
cleanup_env();
}

#[tokio::test]
async fn test_profiling_callgraph_error() {
let _lock = lock_env().await;
let mut s = mockito::Server::new_async().await;
let cfg = test_config(&s.url());
s.mock("POST", mockito::Matcher::Any)
.with_status(500)
.create_async()
.await;
let result = crate::commands::profiling::callgraph(
&cfg,
"*".into(),
"cpu-time".into(),
"15m".into(),
"now".into(),
100,
)
.await;
assert!(result.is_err(), "expected error on 500");
cleanup_env();
}

#[tokio::test]
async fn test_profiling_save_favorite_ok() {
let _lock = lock_env().await;
let mut s = mockito::Server::new_async().await;
let cfg = test_config(&s.url());
let mock = s
.mock("POST", "/api/unstable/profiles/save-favorite")
.with_status(200)
.with_header("content-type", "application/json")
.with_body(r#"{"queryId":"abc"}"#)
.create_async()
.await;
let result = crate::commands::profiling::save_favorite(
&cfg,
"service:web".into(),
"15m".into(),
"now".into(),
"fav-1".into(),
100,
)
.await;
assert!(result.is_ok(), "save_favorite failed: {:?}", result.err());
mock.assert_async().await;
cleanup_env();
}

#[tokio::test]
async fn test_profiling_save_favorite_error() {
let _lock = lock_env().await;
let mut s = mockito::Server::new_async().await;
let cfg = test_config(&s.url());
s.mock("POST", mockito::Matcher::Any)
.with_status(500)
.create_async()
.await;
let result = crate::commands::profiling::save_favorite(
&cfg,
"*".into(),
"15m".into(),
"now".into(),
"fav-1".into(),
100,
)
.await;
assert!(result.is_err(), "expected error on 500");
cleanup_env();
}

#[tokio::test]
async fn test_profiling_download_to_file() {
let _lock = lock_env().await;
let mut s = mockito::Server::new_async().await;
let cfg = test_config(&s.url());
let mock = s
.mock("GET", "/api/ui/profiling/profiles/eid/download")
.with_status(200)
.with_header("content-type", "application/octet-stream")
.with_body(b"profile-bytes")
.create_async()
.await;
let tmp = std::env::temp_dir().join(format!("pup-prof-{}.bin", std::process::id()));
let tmp_str = tmp.to_string_lossy().to_string();
let result = crate::commands::profiling::download(&cfg, "eid", Some(tmp_str.clone())).await;
assert!(result.is_ok(), "download failed: {:?}", result.err());
let contents = std::fs::read(&tmp).expect("output file");
assert_eq!(contents, b"profile-bytes");
let _ = std::fs::remove_file(&tmp);
mock.assert_async().await;
cleanup_env();
}

#[tokio::test]
async fn test_profiling_download_error() {
let _lock = lock_env().await;
let mut s = mockito::Server::new_async().await;
let cfg = test_config(&s.url());
s.mock("GET", mockito::Matcher::Any)
.with_status(404)
.create_async()
.await;
let result = crate::commands::profiling::download(&cfg, "missing-eid", None).await;
assert!(result.is_err(), "expected error on 404");
cleanup_env();
}
}
1 change: 0 additions & 1 deletion src/commands/mod.rs
@@ -61,7 +61,6 @@ pub mod on_call;
pub mod organizations;
pub mod processes;
pub mod product_analytics;
-pub mod profiling;
pub mod reference_tables;
pub mod rum;
#[cfg(not(target_arch = "wasm32"))]