Hi, in the repo https://github.com/huochaitiantang/pytorch-deep-image-matting/
the alpha prediction loss is calculated only for pixels that are unknown in the trimap:
"""
wi = torch.zeros(trimap.shape)
wi[trimap == 128] = 1.
t_wi = wi.cuda()
unknown_region_size = t_wi.sum()
"""
While training, are you doing something similar, or is the loss computed over all pixels?
Sorry, I couldn't find this in the paper or the code.
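For context, here is a minimal sketch of what I mean by a loss restricted to the unknown region, using the same weight mask as the snippet above. The function name, signature, and the Charbonnier-style epsilon are my own choices for illustration, not code from the repo:

```python
import torch

def masked_alpha_loss(pred_alpha, gt_alpha, trimap, eps=1e-6):
    # Weight mask: 1 on unknown trimap pixels (value 128), 0 elsewhere.
    # (Hypothetical helper, mirroring the weighting quoted above.)
    wi = torch.zeros(trimap.shape)
    wi[trimap == 128] = 1.
    unknown_region_size = wi.sum()
    # Differences outside the unknown region are zeroed out by the mask,
    # and the sum is normalized by the number of unknown pixels only.
    diff = (pred_alpha - gt_alpha) * wi
    return torch.sqrt(diff ** 2 + eps ** 2).sum() / (unknown_region_size + eps)
```

With this weighting, errors in the known foreground/background regions contribute nothing to the loss.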