
How to pre-train the RegNet #3

@lionlai1989

Description

I now want to pre-train RegNet with Sentinel-2 and SPOT images, but I don't know how to pre-train RegNet without knowing the ground truth.

In your paper, it says:

The input data to be used for the pretraining of RegNet are the feature maps produced by the pretrained SISRNet for the images in the training set.

The following passage then says:

As described in Sec. IV-B, the input to RegNet are N feature maps from images of the same scene. These feature maps are then synthetically shifted with respect to the first one by a random integer amount of pixels. The purpose is to create a balanced dataset where all possible K^2 classes (shifts) are seen by the network. The desired output is a filter with all zeros except for a one in the position corresponding to the chosen shift.

I don't really understand this part. Where does the random integer amount of pixels come from? If it is random, does that mean the ground truth is a random vector with one component set to 1 and the rest set to 0?
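To make my question concrete, here is a minimal sketch of how I currently understand the "random shift plus one-hot filter" idea. This is only my interpretation, not code from the paper: the window size K, the shift range, and the use of np.roll as the shift operation are all my assumptions.

```python
import numpy as np

K = 3                                # assumed K x K window of possible shifts
rng = np.random.default_rng(0)

def make_training_pair(feature_map):
    """Return (shifted_map, one_hot_label) for RegNet pretraining (my guess)."""
    # draw a random integer shift in [-(K//2), K//2] for rows and columns
    dy, dx = rng.integers(-(K // 2), K // 2 + 1, size=2)

    # synthetically shift the feature map by that integer amount
    # (np.roll is my assumption; the paper only says "synthetically shifted")
    shifted = np.roll(feature_map, shift=(dy, dx), axis=(0, 1))

    # ground truth: a K x K filter of zeros with a single 1 at the position
    # corresponding to the chosen shift
    label = np.zeros((K, K), dtype=np.float32)
    label[dy + K // 2, dx + K // 2] = 1.0
    return shifted, label

# example usage on a dummy feature map
fmap = rng.standard_normal((64, 64)).astype(np.float32)
shifted, label = make_training_pair(fmap)
print(label)
```

Is this roughly what the pretraining labels look like, with the shift drawn at random per sample so that all K^2 classes are covered?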

My question is: what is the ground truth data when pretraining RegNet?

Thank you.
