This repository was archived by the owner on Oct 31, 2023. It is now read-only.

Confusion on the inference pipeline #105

Description

@AmmonZ

Can anyone share their opinion to help me clear up my confusion? My understanding is that the main idea behind DeepSDF is to learn latent features that "embed" the underlying shape information of a mesh, supervised by its SDF, and that these features can be optimized at test time. However, in reconstruct.py I noticed that DeepSDF directly uses the preprocessed ShapeNet ground-truth SDF samples of the validation set (gt val) as gt_sdf to supervise the optimization of the latent features during inference.

My question is whether it is accurate to say that DeepSDF uses the ground-truth test meshes from ShapeNet when evaluating its performance. The reason I ask is that the procedure appears to be: test gt meshes -> test gt SDFs -> use them as supervision for the latent-code optimization at inference. If that is the case, then for a fair comparison any competing method should also be allowed to use the ground-truth test meshes, correct?
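For concreteness, the inference procedure described above can be sketched as follows. This is a minimal toy, not the repository's code: a frozen *linear* stand-in plays the role of the trained MLP decoder, and plain squared error with gradient descent replaces the clamped-L1 loss, latent-norm prior, and Adam optimizer used in the real reconstruct.py. All names (`decoder`, `w_x`, `w_z`, `z`) are hypothetical; only the latent code is updated while the decoder stays fixed, which is the pattern the question is about.

```python
import numpy as np

# Hypothetical frozen "decoder": a linear stand-in for DeepSDF's trained MLP.
rng = np.random.default_rng(0)
w_x = rng.normal(size=3)            # frozen weights for the 3D query point
w_z = rng.normal(size=4)
w_z /= np.linalg.norm(w_z)          # unit latent weights keep the toy stable
b = 0.1

def decoder(points, z):
    """Predicted SDF at each query point, conditioned on latent code z."""
    return points @ w_x + z @ w_z + b

# "Ground-truth" SDF samples. In reconstruct.py these come from the
# preprocessed ShapeNet SDF sample files for the test shape.
points = rng.normal(size=(256, 3))
z_true = rng.normal(size=4)
gt_sdf = decoder(points, z_true)

# Inference: the decoder weights stay frozen; only the latent code z moves,
# supervised by the ground-truth SDF samples.
z = np.zeros(4)
lr = 0.1
for _ in range(500):
    residual = decoder(points, z) - gt_sdf
    # Gradient of mean(residual**2) w.r.t. z for this linear decoder.
    grad_z = 2.0 * np.mean(residual) * w_z
    z -= lr * grad_z
```

After the loop, the optimized latent code reproduces the supervised SDF values closely, which is exactly why the question of whether the gt SDF supervision is fair to compared methods arises.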
