
Mini-batch discrimination
Mini-batch discrimination is another approach to stabilizing the training of GANs. It was proposed by Tim Salimans, Ian Goodfellow, and their co-authors in Improved Techniques for Training GANs, which is available at https://arxiv.org/pdf/1606.03498.pdf. To understand this approach, let's first look at the problem in detail. While training GANs, the discriminator network processes each input independently, so there is no coordination between the gradients it produces for different examples, and nothing pushes the generator network to make its outputs dissimilar to one another. As a result, the generator can collapse to producing very similar images; this is mode collapse, a problem we looked at earlier. To tackle this problem, we can use mini-batch discrimination. The following diagram illustrates the process:

Mini-batch discrimination is a multi-step process. Perform the following steps to add mini-batch discrimination to your network:
- Extract the feature vector $f(x_i)$ for the sample from an intermediate layer of the discriminator network and multiply it by a three-dimensional tensor, $T \in \mathbb{R}^{A \times B \times C}$, generating a matrix, $M_i \in \mathbb{R}^{B \times C}$.
- Then, calculate the L1 distance between the corresponding rows of the matrices $M_i$ and $M_j$ for every pair of samples in the mini-batch and apply a negative exponential, using the following equation:

  $c_b(x_i, x_j) = \exp\left(-\lVert M_{i,b} - M_{j,b} \rVert_{L1}\right)$

- Then, calculate the summation of all of these distances for a particular example, $x_i$:

  $o(x_i)_b = \sum_{j=1}^{n} c_b(x_i, x_j)$

- Then, concatenate $o(x_i)$ with the original features $f(x_i)$ and feed the result to the next layer of the network.

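To make these steps concrete, here is a minimal NumPy sketch of the computation. The batch size and the dimensions A, B, and C are arbitrary values chosen purely for illustration, and in a real discriminator the tensor T would be a learnable parameter rather than a random constant:

```python
import numpy as np

n, A, B, C = 4, 16, 8, 3           # batch size and illustrative tensor dimensions
f = np.random.randn(n, A)          # f(x_i): features from an intermediate layer
T = np.random.randn(A, B, C)       # the three-dimensional tensor T

# Step 1: multiply each feature vector by T to get a matrix M_i of shape (B, C)
M = np.einsum('na,abc->nbc', f, T)                               # (n, B, C)

# Step 2: c[i, j, b] = exp(-||M_{i,b} - M_{j,b}||_1) for every pair of samples
l1 = np.abs(M[:, None, :, :] - M[None, :, :, :]).sum(axis=-1)    # (n, n, B)
c = np.exp(-l1)

# Step 3: o(x_i)_b = sum_j c_b(x_i, x_j), one B-dimensional vector per sample
o = c.sum(axis=1)                                                # (n, B)

# Step 4: concatenate o(x_i) with f(x_i) and pass the result to the next layer
features_out = np.concatenate([f, o], axis=1)                    # (n, A + B)
print(features_out.shape)                                        # (4, 24)
```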

To understand this approach mathematically, let's take a closer look at the various notations:

- $f(x_i)$: The activation or feature vector for the $i^{th}$ sample, taken from an intermediate layer in the discriminator network
- $T$: A three-dimensional tensor, which we multiply by $f(x_i)$
- $M_i$: The matrix generated when we multiply the tensor $T$ and $f(x_i)$
- $o(x_i)$: The output after taking the sum of all distances for a particular example, $x_i$
Mini-batch discrimination helps prevent mode collapse and improves the stability of GAN training.
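In practice, these steps are usually packaged as a layer that sits just before the discriminator's final classification layer. The sketch below shows one possible PyTorch implementation; the class name, layer sizes, and initialization scale are illustrative assumptions rather than details taken from the paper:

```python
import torch
import torch.nn as nn

class MinibatchDiscrimination(nn.Module):
    """Appends B batch-similarity statistics to each sample's feature vector."""

    def __init__(self, in_features, out_features, kernel_dims):
        super().__init__()
        # Learnable tensor T with shape (A, B, C)
        self.T = nn.Parameter(0.1 * torch.randn(in_features, out_features, kernel_dims))

    def forward(self, f):
        # f: (n, A) features from an intermediate layer of the discriminator
        M = torch.einsum('na,abc->nbc', f, self.T)                 # M_i, shape (n, B, C)
        l1 = (M.unsqueeze(1) - M.unsqueeze(0)).abs().sum(dim=-1)   # pairwise L1 distances, (n, n, B)
        o = torch.exp(-l1).sum(dim=1)                              # o(x_i), shape (n, B)
        return torch.cat([f, o], dim=1)                            # (n, A + B)


# Example usage: insert the layer before the discriminator's final classification layer
layer = MinibatchDiscrimination(in_features=128, out_features=32, kernel_dims=8)
features = torch.randn(16, 128)      # a batch of 16 intermediate feature vectors
print(layer(features).shape)         # torch.Size([16, 160])
```

Because the layer only appends a small number of extra features per sample, only the first downstream weight matrix changes shape; the rest of the discriminator can remain as it is.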