## tl;dr

Factorization approaches model entities and relations as low-rank vectors and then recover the adjacency tensor through some multilinear product of the components (e.g., the dot product). Some of these operations are limited in expressive power, however. The dot product is symmetric ($\mathbf a \cdot \mathbf b = \mathbf b \cdot \mathbf a$), so a score function built on it cannot depend on argument order: the probability that $a$ is connected to $b$ is forced to equal the probability that $b$ is connected to $a$, which rules out modeling asymmetric relations. With complex-valued embeddings, however, the dot product becomes the Hermitian product, which is not symmetric, yet it retains the linear scaling property that makes the dot product so valuable.
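A minimal sketch of this asymmetry, using a toy ComplEx-style score $\mathrm{Re}\!\left(\sum_k e_s[k]\, w_r[k]\, \overline{e_o}[k]\right)$ (the embedding names and dimension here are illustrative, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
d = 4

# Real embeddings: the plain dot product is symmetric in its arguments.
a, b = rng.normal(size=d), rng.normal(size=d)
assert np.isclose(a @ b, b @ a)

def score(e_s, w_r, e_o):
    """ComplEx-style score: real part of the trilinear Hermitian product."""
    return np.real(np.sum(e_s * w_r * np.conj(e_o)))

# Random complex embeddings for a subject, an object, and a relation.
e_s = rng.normal(size=d) + 1j * rng.normal(size=d)
e_o = rng.normal(size=d) + 1j * rng.normal(size=d)
w_r = rng.normal(size=d) + 1j * rng.normal(size=d)

# Swapping subject and object changes the score, so direction can be modeled,
# while the cost of scoring stays linear in the embedding dimension d.
print(score(e_s, w_r, e_o), score(e_o, w_r, e_s))
```

Swapping the arguments flips the sign of the imaginary contribution of $w_r$, so whenever the relation embedding has a nonzero imaginary part the two directions get different scores.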

### Details

The authors restrict the class of complex matrices to “normal matrices”, i.e., matrices that commute with their conjugate transpose ($X X^* = X^* X$). By the spectral theorem, such matrices are unitarily diagonalizable, $X = E W E^*$ with $E$ unitary, so the inverse needed for the spectral decomposition is just the conjugate transpose and never has to be computed explicitly. They argue that this class is rich enough to capture the relations relevant to the problem (e.g., symmetric and anti-symmetric sign matrices, all orthogonal matrices, the assignment matrices used to represent bipartite graphs, etc.).
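The two properties above can be checked numerically on a small example. This is an illustrative sketch (the matrix is made up, not from the paper): a real anti-symmetric matrix is normal, and its eigendecomposition reconstructs it using only the conjugate transpose of the eigenvector matrix, with no inversion.

```python
import numpy as np

# A real anti-symmetric matrix (A^T = -A); such matrices are normal.
A = np.array([[ 0.0,  1.0, -2.0],
              [-1.0,  0.0,  3.0],
              [ 2.0, -3.0,  0.0]])

# Normality: A commutes with its conjugate transpose.
assert np.allclose(A @ A.conj().T, A.conj().T @ A)

# Spectral theorem for normal matrices: A = E diag(w) E*, with E unitary.
w, E = np.linalg.eig(A)
assert np.allclose(E @ E.conj().T, np.eye(3))       # E is unitary: E^{-1} = E*
assert np.allclose(E @ np.diag(w) @ E.conj().T, A)  # reconstruction, no inverse needed
```

Because the eigenvector matrix is unitary, replacing $E^{-1}$ with $E^*$ in the decomposition is exact, which is the computational convenience the restriction to normal matrices buys.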