Conversation
Rationale: In short-lived applications (e.g. Kubernetes Jobs), most of the time all traces are lost because they are never synced. Users need to call `Close` manually, but since all spans hold a `shared_ptr` to the tracer, it is hard to find the correct place to call it. Calling it in the destructor ensures all spans are included in the flush.
|
@dgoffredo If I understand correctly, this seems to be a bug in the integration tests? Edit: Ahh, found the source of the problem now in the opentracing_nginx integration. Not sure if I should laugh or cry :D |
|
What it comes down to is a matter of policy about whether we want this behavior. I'll revisit this next week to present our options. |
|
@dgoffredo Any news? I understand this changes behavior, but only where it was undefined before. The dummy span could already be sent to the agent before; it was just very unlikely. Now the behavior is defined, and I think it is therefore an improvement. No new behavior was introduced; the existing behavior was only made reliable. |
|
@DS-Serafin I'm closing this PR as I'm no longer a maintainer of Datadog software and there are now newer alternatives that are supported (dd-trace-cpp, nginx-datadog). Please direct any further questions to @dmehala. |
This is my fiddling with #249.
Flushing the tracer on destruction causes nginx's master process to send a trace within the window during which the integration test waits for a certain number of traces.
I still haven't figured out how to get the integration tests running on my new machine's Docker setup, so I'll use CI via this pull request instead.