
Commit 59ce286

Important note on gradient penalty for WGAN-GP
As recommended in https://arxiv.org/pdf/1704.00028.pdf, the WGAN-GP loss should not be combined with a batch/instance normalization layer in the discriminator (critic).
1 parent a0f535f commit 59ce286
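
To illustrate the recommendation, here is a minimal, hypothetical PyTorch sketch of a critic built without any batch/instance normalization layers. The class name and layer sizes are illustrative only and are not taken from models/networks.py.

    import torch.nn as nn

    # Hypothetical example: a small convolutional critic with NO batch/instance
    # normalization, as recommended for WGAN-GP. Names and sizes are illustrative.
    class NoNormCritic(nn.Module):
        def __init__(self, input_nc=3, ndf=64):
            super().__init__()
            self.model = nn.Sequential(
                nn.Conv2d(input_nc, ndf, kernel_size=4, stride=2, padding=1),
                nn.LeakyReLU(0.2, inplace=True),
                nn.Conv2d(ndf, ndf * 2, kernel_size=4, stride=2, padding=1),
                nn.LeakyReLU(0.2, inplace=True),  # no norm layer between conv blocks
                nn.Conv2d(ndf * 2, 1, kernel_size=4, stride=1, padding=1),  # 1-channel score map
            )

        def forward(self, x):
            return self.model(x)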

File tree

1 file changed: +1 −0 lines changed


models/networks.py

@@ -281,6 +281,7 @@ def cal_gradient_penalty(netD, real_data, fake_data, device, type='mixed', const
         lambda_gp (float) -- weight for this loss
 
     Returns the gradient penalty loss
+    NOTE: Strongly advised not to use batch/instance norm with the Discriminator(or Critic) if using gradient penalty!
     """
     if lambda_gp > 0.0:
         if type == 'real':   # either use real images, fake images, or a linear interpolation of two.
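
For context, a minimal sketch of how a WGAN-GP gradient penalty of this form is typically computed, reusing the parameter names visible in the hunk header above (netD, real_data, fake_data, device, lambda_gp). This is an illustrative sketch under those assumptions, not the repository's exact implementation, and it assumes 4D image batches and the 'mixed' (interpolated) mode.

    import torch

    # Illustrative sketch of a WGAN-GP gradient penalty; not the repository's
    # exact cal_gradient_penalty implementation.
    def gradient_penalty_sketch(netD, real_data, fake_data, device, lambda_gp=10.0):
        # Sample one random interpolation coefficient per example ('mixed' data).
        alpha = torch.rand(real_data.size(0), 1, 1, 1, device=device).expand_as(real_data)
        interpolates = (alpha * real_data + (1 - alpha) * fake_data).detach().requires_grad_(True)

        # Critic score on the interpolated samples.
        disc_interpolates = netD(interpolates)

        # Gradient of the critic output w.r.t. its input, kept in the graph so the
        # penalty itself can be backpropagated into the critic's parameters.
        gradients = torch.autograd.grad(outputs=disc_interpolates, inputs=interpolates,
                                        grad_outputs=torch.ones_like(disc_interpolates),
                                        create_graph=True, retain_graph=True)[0]

        gradients = gradients.view(real_data.size(0), -1)
        # Penalize deviation of each sample's gradient norm from 1, as in the paper.
        return ((gradients.norm(2, dim=1) - 1.0) ** 2).mean() * lambda_gp

The NOTE added in this commit matters because batch normalization makes the critic's output for one sample depend on the other samples in the batch, which conflicts with penalizing the gradient norm per sample; the WGAN-GP paper suggests using no normalization or layer normalization in the critic instead.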
