66 commits
3410918
began transferring camera transform to photoncamera
Bobcat66 Mar 5, 2026
522d464
edited photon-serde message for PipelineResult, added camera transfor…
Bobcat66 Mar 5, 2026
9845216
updated PipelineResult protobuf
Bobcat66 Mar 5, 2026
955d597
fixed photoncamera constructor
Bobcat66 Mar 5, 2026
b27d8af
fixed photoncamerasim
Bobcat66 Mar 5, 2026
a9cb620
added back some PhotonPipelineResult constructors for backwards compa…
Bobcat66 Mar 5, 2026
47272a2
worked on java serde for the new PipelineResult
Bobcat66 Mar 6, 2026
9aa9c33
Hopefully compiles? Added serde code for C++
Bobcat66 Mar 6, 2026
b4180de
fixed photonpipelineresult constructor description
Bobcat66 Mar 6, 2026
8e5ef16
added old constructor for PhotonPipelineResult
Bobcat66 Mar 6, 2026
cd707bc
Hopefully fixed LegacyPhotonPoseEstimatorTest
Bobcat66 Mar 6, 2026
6e3759e
continued work on the c++ version of photonlib, still doesn't compile
Bobcat66 Mar 6, 2026
a780d94
CPP tests compile now i think
Bobcat66 Mar 7, 2026
4b0fd04
fixed cpp packet test
Bobcat66 Mar 7, 2026
0baf428
misc changes, fixed PhotonPipelineResult equality
Bobcat66 Mar 7, 2026
ddf4e1d
updated photon serde to support optional shimmed types
Bobcat66 Mar 7, 2026
b7efcd3
hopefully added optional WPI structs to photonserde
Bobcat66 Mar 7, 2026
4a2c9f0
added optional WPI structs to cpp photon serde
Bobcat66 Mar 7, 2026
6855851
removed robottocameratransform
Bobcat66 Mar 8, 2026
656fde1
istg if this fixes the CI build
Bobcat66 Mar 8, 2026
ac2b083
added python photonpipelineresult
Bobcat66 Mar 8, 2026
cbfdb45
ran wpiformat
Bobcat66 Mar 8, 2026
2b280b7
Merge pull request #1 from Bobcat66/photon-serde-patch
Bobcat66 Mar 8, 2026
d6090a1
Backend plumbing, added Camera position to frames
Bobcat66 Mar 8, 2026
a9e13c3
Merge branch 'main' of https://github.com/Bobcat66/photonvision
Bobcat66 Mar 8, 2026
87edd94
fixed null settables case in FileVisionSource
Bobcat66 Mar 8, 2026
c31b8f1
updated PhotonPoseEstimator
Bobcat66 Mar 9, 2026
63d7e10
updated LegacyPhotonPoseEstimatorTest
Bobcat66 Mar 9, 2026
e786a1d
hopefully fixed the problems in LegacyPhotonPoseEstimatorTest
Bobcat66 Mar 9, 2026
496c996
i fixed it yaaay
Bobcat66 Mar 9, 2026
38f6356
updated C++ PhotonPoseEstimator
Bobcat66 Mar 9, 2026
f7ebc15
ran wpiformat
Bobcat66 Mar 9, 2026
ac00977
updated C++ PhotonCamera NT
Bobcat66 Mar 10, 2026
93e6790
updated C++ and python photonCameras to publish robottocamera transform
Bobcat66 Mar 10, 2026
aed5bc6
fixed bug
Bobcat66 Mar 10, 2026
b91c2b6
fixed photon-docs build error
Bobcat66 Mar 11, 2026
824eb8d
fixed issue where the backend would read NT before photonlib publishe…
Bobcat66 Mar 12, 2026
92c67be
fixed bug in NTDataPublisher
Bobcat66 Mar 12, 2026
39f3989
ran wpiformat
Bobcat66 Mar 12, 2026
5a8b3ce
fixed rNTDataPublisher::onRobotToCameraChange, removed hacky robotToC…
Bobcat66 Mar 12, 2026
7ed34bc
updated photonlibpy pose estimator
Bobcat66 Mar 14, 2026
21c5ee8
made robotToCamera thread-safe
Bobcat66 Mar 25, 2026
3ffbd85
switched to a lockless atomicreference system
Bobcat66 Mar 26, 2026
1e925b7
ran spotlessApply
Bobcat66 Mar 29, 2026
fbcb712
ran wpiformat
Bobcat66 Mar 29, 2026
c7701e9
removed robotToCamera from photonposeestimator entirely
Bobcat66 Mar 29, 2026
b6f70a7
wpiformat
Bobcat66 Mar 29, 2026
970854a
updated cpp poseest example
Bobcat66 Mar 31, 2026
f45dd72
privated FileVisionSource getRobotToCamera()
Bobcat66 Mar 31, 2026
dc2af8f
linting
Bobcat66 Mar 31, 2026
9a67976
refactored backend
Bobcat66 Mar 31, 2026
c9ec8bb
fixed python photonPoseEstimator test
Bobcat66 Apr 9, 2026
bfea884
Merge branch 'issue_2095' into issue_2095_2027_rebase
Bobcat66 Apr 21, 2026
d020a5a
worked on rebase
Bobcat66 Apr 22, 2026
63817e0
wpiformat
Bobcat66 Apr 22, 2026
29b113b
it finally builds again horay
Bobcat66 Apr 22, 2026
3e0b03d
linting
Bobcat66 Apr 22, 2026
72a9d60
switched to using getters in photonposeestimator.java
Bobcat66 Apr 23, 2026
e6c1db9
linting
Bobcat66 Apr 23, 2026
97f4922
Merge branch '2027' into issue_2095_2027_rebase
Bobcat66 Apr 23, 2026
1862c18
added design doc
Bobcat66 Apr 23, 2026
5fc6a2e
linting
Bobcat66 Apr 23, 2026
245797f
added robot-to-camera design doc to index
Bobcat66 Apr 23, 2026
60ac857
moved robotToCamera transform to VisionModule
Bobcat66 Apr 23, 2026
4190a35
added robotToCamera transform buffer with lerp
Bobcat66 Apr 26, 2026
dee905c
updated python and cpp photonlibs
Bobcat66 Apr 26, 2026
@@ -6,4 +6,5 @@ image-rotation
time-sync
camera-matching
e2e-latency
robot-to-camera
```
@@ -0,0 +1,14 @@
# Robot to Camera

## How 3D pose estimation works

At its core, PhotonVision's 3D pose estimation is built around solving the Perspective-n-Point (PnP) problem: given a set of points in 3D space and their projections onto a 2D image, determine the pose of the camera. In PhotonVision's case, the points are the corners of one or more AprilTags.
However, this yields the camera's pose, *not* the robot's pose. The solution is to apply an offset: PhotonVision associates with each camera a `Transform3d` encoding the 6DOF transformation from the robot (or rather, the point on the robot considered its center) to the camera. The final step of pose estimation transforms the camera pose reported by the PnP solver by the inverse of this `Transform3d`, yielding an estimate of the robot's pose.
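As a sketch of that last step (using plain 4x4 homogeneous matrices in place of WPILib's `Transform3d`, and a yaw-only rotation for brevity), the robot's field pose is the camera's field pose composed with the inverse of the robot-to-camera transform:

```python
import numpy as np

def transform(yaw, translation):
    """Build a 4x4 homogeneous transform: rotation about Z, then translation."""
    c, s = np.cos(yaw), np.sin(yaw)
    T = np.eye(4)
    T[:3, :3] = [[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]]
    T[:3, 3] = translation
    return T

# Example: camera mounted 0.3 m forward of and 0.5 m above the robot's center.
robot_to_camera = transform(0.0, [0.3, 0.0, 0.5])

# Camera pose in the field frame, as a PnP solver might report it.
field_to_camera = transform(np.pi / 2, [4.0, 2.0, 0.5])

# Since field_to_camera = field_to_robot @ robot_to_camera:
field_to_robot = field_to_camera @ np.linalg.inv(robot_to_camera)
```

This mirrors what transforming the camera pose by the inverse `Transform3d` accomplishes; the mount offsets above are invented for illustration.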

## What does the plumbing look like?

The robot code (optionally) passes the `PhotonCamera` object a `Transform3d` representing the robot-to-camera transformation. If provided, `PhotonCamera` transmits this `Transform3d` to the coprocessor, which appends it to every result it sends back to the robot controller; the `PhotonPoseEstimator` object then applies the transformation to the camera pose. The result stores the `Transform3d` as an optional: if no `Transform3d` is passed to `PhotonCamera`, none is appended to the results, and `PhotonPoseEstimator` simply won't work.
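In rough pseudo-Python (hypothetical names and a 4x4-matrix stand-in for `Transform3d`; the real PhotonLib types differ), the estimator-side handling of the optional transform amounts to:

```python
from dataclasses import dataclass
from typing import Optional

import numpy as np

@dataclass
class PipelineResult:
    """Hypothetical slice of a pipeline result, with poses as 4x4 matrices."""
    field_to_camera: np.ndarray                   # from the PnP solver
    robot_to_camera: Optional[np.ndarray] = None  # appended only if published

def estimate_robot_pose(result: PipelineResult) -> Optional[np.ndarray]:
    if result.robot_to_camera is None:
        # The robot code never gave PhotonCamera a transform, so the
        # estimator has nothing to invert: estimation cannot proceed.
        return None
    return result.field_to_camera @ np.linalg.inv(result.robot_to_camera)
```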

## That seems complicated and silly. Why not just keep the robot to camera transform in PhotonLib?

We used to! However, PhotonVision has gained a new algorithm, Constrained PnP, which fuses gyroscopic data with PnP pose observations and offers significantly improved accuracy and stability. Notably, this algorithm requires the robot-to-camera transform to run properly. In previous seasons this wasn't a problem, as Constrained PnP ran entirely on the RoboRIO anyway in order to access robot gyro data. However, Constrained PnP is very computationally expensive, so work is underway to offload it to the coprocessor and to expose gyro data to the coprocessor. Publishing the robot-to-camera transform to the coprocessor over NetworkTables, and making it an integrated part of PhotonVision results, facilitates offloading the Constrained PnP workload (and other estimation workloads that require gyroscopic data) to the coprocessor.
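On the coprocessor side, the pipelines in this PR sample the transform at each frame's timestamp through a `Function<Long, Transform3d>` sampler backed by a timestamped buffer with linear interpolation. A minimal sketch of such a buffer (with a plain float standing in for `Transform3d`; a real implementation would lerp the translation and slerp the rotation):

```python
import bisect

class TransformBuffer:
    """Timestamped sample buffer with linear interpolation between samples."""

    def __init__(self):
        self._times = []
        self._values = []

    def add_sample(self, t, value):
        # Keep samples sorted by timestamp so queries can bisect.
        i = bisect.bisect_left(self._times, t)
        self._times.insert(i, t)
        self._values.insert(i, value)

    def sample(self, t):
        if not self._times:
            return None  # analogous to the (time) -> null default sampler
        i = bisect.bisect_left(self._times, t)
        if i == 0:
            return self._values[0]       # clamp before the first sample
        if i == len(self._times):
            return self._values[-1]      # clamp after the last sample
        t0, t1 = self._times[i - 1], self._times[i]
        v0, v1 = self._values[i - 1], self._values[i]
        frac = (t - t0) / (t1 - t0)
        return v0 + frac * (v1 - v0)
```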
@@ -18,6 +18,7 @@
package org.photonvision.common.dataflow.networktables;

import java.util.List;
import java.util.function.BiConsumer;
import java.util.function.BooleanSupplier;
import java.util.function.Consumer;
import java.util.function.Supplier;
@@ -35,6 +36,7 @@
import org.wpilib.networktables.NetworkTable;
import org.wpilib.networktables.NetworkTableEvent;
import org.wpilib.networktables.NetworkTablesJNI;
import org.wpilib.networktables.TimestampedObject;

public class NTDataPublisher implements CVPipelineResultConsumer {
private final Logger logger = new Logger(NTDataPublisher.class, LogGroup.General);
@@ -55,20 +57,25 @@ public class NTDataPublisher implements CVPipelineResultConsumer {
private final Consumer<Integer> fpsLimitConsumer;
private final Supplier<Integer> fpsLimitSupplier;

NTDataChangeListener robotToCameraListener;
private final BiConsumer<Double, Transform3d> robotToCameraConsumer;

public NTDataPublisher(
String cameraNickname,
Supplier<Integer> pipelineIndexSupplier,
Consumer<Integer> pipelineIndexConsumer,
BooleanSupplier driverModeSupplier,
Consumer<Boolean> driverModeConsumer,
Supplier<Integer> fpsLimitSupplier,
Consumer<Integer> fpsLimitConsumer) {
Consumer<Integer> fpsLimitConsumer,
BiConsumer<Double, Transform3d> robotToCameraConsumer) {
this.pipelineIndexSupplier = pipelineIndexSupplier;
this.pipelineIndexConsumer = pipelineIndexConsumer;
this.driverModeSupplier = driverModeSupplier;
this.driverModeConsumer = driverModeConsumer;
this.fpsLimitSupplier = fpsLimitSupplier;
this.fpsLimitConsumer = fpsLimitConsumer;
this.robotToCameraConsumer = robotToCameraConsumer;

updateCameraNickname(cameraNickname);
updateEntries();
@@ -98,6 +105,15 @@ private void onPipelineIndexChange(NetworkTableEvent entryNotification) {
logger.debug("Set pipeline index to " + newIndex);
}

private void onRobotToCameraChange(NetworkTableEvent entryNotification) {
// HACK: the entryNotification's value can't be cast to Transform3d, so we read directly from
[Contributor review comment: It might be worth making an upstream issue against wpilib to support structs here]
// the subscriber
TimestampedObject<Transform3d> robotToCamera = ts.robotToCameraSubscriber.getAtomic(null);
var doubleTime = (robotToCamera.serverTime) * (double) 1e6;
robotToCameraConsumer.accept(doubleTime, robotToCamera.value);
logger.debug("Sampled robot to camera transform as " + robotToCamera + " at t=" + doubleTime);
}

private void onDriverModeChange(NetworkTableEvent entryNotification) {
var newDriverMode = entryNotification.valueData.value.getBoolean();
var originalDriverMode = driverModeSupplier.getAsBoolean();
@@ -134,6 +150,7 @@ private void updateEntries() {
if (pipelineIndexListener != null) pipelineIndexListener.remove();
if (driverModeListener != null) driverModeListener.remove();
if (fpsLimitListener != null) fpsLimitListener.remove();
if (robotToCameraListener != null) robotToCameraListener.remove();

ts.updateEntries();

@@ -148,6 +165,10 @@ private void updateEntries() {
fpsLimitListener =
new NTDataChangeListener(
ts.subTable.getInstance(), ts.fpsLimitSubscriber, this::onFPSLimitChange);

robotToCameraListener =
new NTDataChangeListener(
ts.subTable.getInstance(), ts.robotToCameraSubscriber, this::onRobotToCameraChange);
}

public void updateCameraNickname(String newCameraNickname) {
@@ -187,7 +208,8 @@ public void accept(CVPipelineResult result) {
now + offset,
NetworkTablesManager.getInstance().getTimeSinceLastPong(),
TrackedTarget.simpleFromTrackedTargets(acceptedResult.targets),
acceptedResult.multiTagResult);
acceptedResult.multiTagResult,
acceptedResult.robotToCamera);

// random guess at size of the array
ts.resultPublisher.set(simplified, 1024);
@@ -57,10 +57,6 @@ public FileFrameProvider(Path path, double fov, int maxFPS) {
this(path, fov, maxFPS, null);
}

public FileFrameProvider(Path path, double fov, CameraCalibrationCoefficients calibration) {
this(path, fov, MAX_FPS, calibration);
}

public FileFrameProvider(
Path path, double fov, int maxFPS, CameraCalibrationCoefficients calibration) {
if (!Files.exists(path))
@@ -97,6 +93,10 @@ public FileFrameProvider(Path path, double fov) {
this(path, fov, MAX_FPS);
}

public FileFrameProvider(Path path, double fov, CameraCalibrationCoefficients calibration) {
this(path, fov, MAX_FPS, calibration);
}

@Override
public CapturedFrame getInputMat() {
var out = new CVMat();
@@ -20,6 +20,7 @@
import java.util.ArrayList;
import java.util.List;
import java.util.Optional;
import java.util.function.Function;
import org.photonvision.common.configuration.ConfigManager;
import org.photonvision.common.dataflow.structures.Packet;
import org.photonvision.common.logging.LogGroup;
@@ -63,12 +64,13 @@ public class AprilTagPipeline extends CVPipeline<CVPipelineResult, AprilTagPipel
private static final FrameThresholdType PROCESSING_TYPE = FrameThresholdType.GREYSCALE;

public AprilTagPipeline() {
super(PROCESSING_TYPE);
super(PROCESSING_TYPE, (time) -> null);
settings = new AprilTagPipelineSettings();
}

public AprilTagPipeline(AprilTagPipelineSettings settings) {
super(PROCESSING_TYPE);
public AprilTagPipeline(
AprilTagPipelineSettings settings, Function<Long, Transform3d> robotToCameraSampler) {
super(PROCESSING_TYPE, robotToCameraSampler);
this.settings = settings;
}

@@ -247,7 +249,13 @@ protected CVPipelineResult process(Frame frame, AprilTagPipelineSettings setting
var fps = fpsResult.output;

return new CVPipelineResult(
frame.sequenceID, sumPipeNanosElapsed, fps, targetList, multiTagResult, frame);
frame.sequenceID,
sumPipeNanosElapsed,
fps,
targetList,
multiTagResult,
frame,
Optional.ofNullable(robotToCameraSampler.apply(frame.timestampNanos)));
}

@Override
@@ -20,6 +20,7 @@
import java.util.ArrayList;
import java.util.List;
import java.util.Optional;
import java.util.function.Function;
import org.opencv.core.Mat;
import org.opencv.imgproc.Imgproc;
import org.opencv.objdetect.Objdetect;
@@ -55,12 +56,13 @@ public class ArucoPipeline extends CVPipeline<CVPipelineResult, ArucoPipelineSet
private final CalculateFPSPipe calculateFPSPipe = new CalculateFPSPipe();

public ArucoPipeline() {
super(FrameThresholdType.GREYSCALE);
super(FrameThresholdType.GREYSCALE, (time) -> null);
settings = new ArucoPipelineSettings();
}

public ArucoPipeline(ArucoPipelineSettings settings) {
super(FrameThresholdType.GREYSCALE);
public ArucoPipeline(
ArucoPipelineSettings settings, Function<Long, Transform3d> robotToCameraSampler) {
super(FrameThresholdType.GREYSCALE, robotToCameraSampler);
this.settings = settings;
}

@@ -235,7 +237,13 @@ protected CVPipelineResult process(Frame frame, ArucoPipelineSettings settings)
var fps = fpsResult.output;

return new CVPipelineResult(
frame.sequenceID, sumPipeNanosElapsed, fps, targetList, multiTagResult, frame);
frame.sequenceID,
sumPipeNanosElapsed,
fps,
targetList,
multiTagResult,
frame,
Optional.ofNullable(robotToCameraSampler.apply(frame.timestampNanos)));
}

private void drawThresholdFrame(Mat greyMat, Mat outputMat, int windowSize, double constant) {
@@ -17,26 +17,31 @@

package org.photonvision.vision.pipeline;

import java.util.function.Function;
import org.photonvision.vision.camera.QuirkyCamera;
import org.photonvision.vision.frame.Frame;
import org.photonvision.vision.frame.FrameStaticProperties;
import org.photonvision.vision.frame.FrameThresholdType;
import org.photonvision.vision.opencv.Releasable;
import org.photonvision.vision.pipeline.result.CVPipelineResult;
import org.wpilib.math.geometry.Transform3d;

public abstract class CVPipeline<R extends CVPipelineResult, S extends CVPipelineSettings>
implements Releasable {
protected S settings;
protected FrameStaticProperties frameStaticProperties;
protected QuirkyCamera cameraQuirks;
protected Function<Long, Transform3d> robotToCameraSampler;

private final FrameThresholdType thresholdType;

// So releaseable doesn't keep track of if we double-free something. so (ew) remember that here
protected volatile boolean released = false;

public CVPipeline(FrameThresholdType thresholdType) {
public CVPipeline(
FrameThresholdType thresholdType, Function<Long, Transform3d> robotToCameraSampler) {
this.thresholdType = thresholdType;
this.robotToCameraSampler = robotToCameraSampler;
}

public FrameThresholdType getThresholdType() {
@@ -74,7 +74,7 @@ public Calibrate3dPipeline() {
}

public Calibrate3dPipeline(int minSnapshots) {
super(PROCESSING_TYPE);
super(PROCESSING_TYPE, (time) -> null);
this.settings = new Calibration3dPipelineSettings();
this.foundCornersList = new ArrayList<>();
this.minSnapshots = minSnapshots;
@@ -19,6 +19,8 @@

import java.util.Arrays;
import java.util.List;
import java.util.Optional;
import java.util.function.Function;
import org.opencv.core.Point;
import org.photonvision.vision.frame.Frame;
import org.photonvision.vision.frame.FrameThresholdType;
@@ -31,6 +33,7 @@
import org.photonvision.vision.pipeline.result.CVPipelineResult;
import org.photonvision.vision.target.PotentialTarget;
import org.photonvision.vision.target.TrackedTarget;
import org.wpilib.math.geometry.Transform3d;
import org.wpilib.math.util.Pair;

public class ColoredShapePipeline
@@ -54,12 +57,13 @@ public class ColoredShapePipeline
private static final FrameThresholdType PROCESSING_TYPE = FrameThresholdType.HSV;

public ColoredShapePipeline() {
super(PROCESSING_TYPE);
super(PROCESSING_TYPE, (time) -> null);
settings = new ColoredShapePipelineSettings();
}

public ColoredShapePipeline(ColoredShapePipelineSettings settings) {
super(PROCESSING_TYPE);
public ColoredShapePipeline(
ColoredShapePipelineSettings settings, Function<Long, Transform3d> robotToCameraSampler) {
super(PROCESSING_TYPE, robotToCameraSampler);
this.settings = settings;
}

@@ -213,6 +217,12 @@ protected CVPipelineResult process(Frame frame, ColoredShapePipelineSettings set
var fpsResult = calculateFPSPipe.run(null);
var fps = fpsResult.output;

return new CVPipelineResult(frame.sequenceID, sumPipeNanosElapsed, fps, targetList, frame);
return new CVPipelineResult(
frame.sequenceID,
sumPipeNanosElapsed,
fps,
targetList,
frame,
Optional.ofNullable(robotToCameraSampler.apply(frame.timestampNanos)));
}
}
@@ -18,13 +18,15 @@
package org.photonvision.vision.pipeline;

import java.util.List;
import java.util.function.Function;
import org.photonvision.common.util.math.MathUtils;
import org.photonvision.vision.frame.Frame;
import org.photonvision.vision.frame.FrameThresholdType;
import org.photonvision.vision.pipe.impl.CalculateFPSPipe;
import org.photonvision.vision.pipe.impl.Draw2dCrosshairPipe;
import org.photonvision.vision.pipe.impl.ResizeImagePipe;
import org.photonvision.vision.pipeline.result.DriverModePipelineResult;
import org.wpilib.math.geometry.Transform3d;
import org.wpilib.math.util.Pair;

public class DriverModePipeline
@@ -36,12 +38,13 @@ public class DriverModePipeline
private static final FrameThresholdType PROCESSING_TYPE = FrameThresholdType.NONE;

public DriverModePipeline() {
super(PROCESSING_TYPE);
super(PROCESSING_TYPE, (time) -> null);
settings = new DriverModePipelineSettings();
}

public DriverModePipeline(DriverModePipelineSettings settings) {
super(PROCESSING_TYPE);
public DriverModePipeline(
DriverModePipelineSettings settings, Function<Long, Transform3d> robotToCameraSampler) {
super(PROCESSING_TYPE, robotToCameraSampler);
this.settings = settings;
}

@@ -17,6 +17,7 @@

package org.photonvision.vision.pipeline;

import java.util.function.Function;
import org.opencv.core.Mat;
import org.photonvision.common.util.math.MathUtils;
import org.photonvision.vision.frame.Frame;
@@ -26,6 +27,7 @@
import org.photonvision.vision.pipe.impl.FocusPipe;
import org.photonvision.vision.pipe.impl.ResizeImagePipe;
import org.photonvision.vision.pipeline.result.FocusPipelineResult;
import org.wpilib.math.geometry.Transform3d;

public class FocusPipeline extends CVPipeline<FocusPipelineResult, FocusPipelineSettings> {
private final FocusPipe focusPipe = new FocusPipe();
@@ -35,12 +37,13 @@ public class FocusPipeline extends CVPipeline<FocusPipelineResult, FocusPipeline
private static final FrameThresholdType PROCESSING_TYPE = FrameThresholdType.NONE;

public FocusPipeline() {
super(PROCESSING_TYPE);
super(PROCESSING_TYPE, (time) -> null);
settings = new FocusPipelineSettings();
}

public FocusPipeline(FocusPipelineSettings settings) {
super(PROCESSING_TYPE);
public FocusPipeline(
FocusPipelineSettings settings, Function<Long, Transform3d> robotToCameraSampler) {
super(PROCESSING_TYPE, robotToCameraSampler);
this.settings = settings;
}
