Global consistency loss appears like val loss #9

@GozdeUnver

Description

@GozdeUnver

Hello,
I went through your code and noticed that in train.py the global consistency loss is computed in training mode but printed as a validation loss. Although this doesn't break anything (the global consistency loss still contributes to training), labeling the output "val losses" is misleading. Additionally, there is no separate validation loop:

for j in range(0, len(src_tiles), tile_batch_size):
      src_tiles_ = src_tiles[j: j + tile_batch_size]
      dst_tiles_ = dst_tiles[j: j + tile_batch_size]
      dst_fake_tiles_ = dst_fake_tiles[j: j + tile_batch_size]

      # pass real and fake images
      model.set_input_image({'src_real': src_tiles_, 'dst_real': dst_tiles_, 'dst_fake': dst_fake_tiles_})

      # calculate style and content losses, gradients, update network weights
      losses = model.optimize_parameters_image()
      iter_count += 1

      # free up gpu memory
      delete_tensor_gpu({'src': src_tiles_, 'dst': dst_tiles_, 'dst_fake': dst_fake_tiles_})

      # loss logging
      if iter_count % loss_logging_freq == 0:
          losses = {k: round(v, 4) for k, v in losses.items()}
          print('val losses: ', losses)
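For comparison, a proper validation pass would switch the model to eval mode and disable gradients so the weights are not updated. The sketch below is only illustrative, assuming a PyTorch-style setup; `model`, `val_tiles`, `val_targets`, and `tile_batch_size` are stand-in names, not identifiers from this repository:

```python
import torch
import torch.nn as nn

# Stand-ins for the real model and validation data (illustrative only)
model = nn.Linear(4, 1)
criterion = nn.MSELoss()
val_tiles = torch.randn(8, 4)
val_targets = torch.randn(8, 1)
tile_batch_size = 4

model.eval()                      # switch off dropout / batch-norm updates
val_losses = []
with torch.no_grad():             # no gradients: weights are not touched
    for j in range(0, len(val_tiles), tile_batch_size):
        batch = val_tiles[j: j + tile_batch_size]
        target = val_targets[j: j + tile_batch_size]
        loss = criterion(model(batch), target)
        val_losses.append(round(loss.item(), 4))
model.train()                     # restore training mode afterwards

print('val losses: ', val_losses)
```

In contrast, the quoted loop calls `model.optimize_parameters_image()`, which updates the network weights, so its printed values are training losses rather than validation losses.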
