Knowledge Graph Embedding by Translating on Hyperplanes
- Zhen Wang
- Jianwen Zhang
- Jianlin Feng
- Zheng Chen
Twenty-Eighth AAAI Conference on Artificial Intelligence
Published by AAAI - Association for the Advancement of Artificial Intelligence
We deal with embedding a large-scale knowledge graph composed of entities and relations into a continuous vector space. TransE is a promising, recently proposed method that is very efficient while achieving state-of-the-art predictive performance. We discuss several mapping properties of relations that should be considered in embedding, such as reflexive, one-to-many, many-to-one, and many-to-many, and we note that TransE handles these properties poorly: its translation assumption h + r ≈ t, for instance, pushes all tail entities of a one-to-many relation toward the same embedding. Some more complex models can preserve these mapping properties, but they sacrifice efficiency in the process. To strike a good trade-off between model capacity and efficiency, in this paper we propose TransH, which models a relation as a hyperplane together with a translation operation on it. In this way we preserve the above mapping properties of relations with almost the same model complexity as TransE.
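As a rough illustration of the hyperplane-plus-translation idea, a minimal NumPy sketch of such a score function follows. The function name transh_score and the symbols w_r (unit normal of the relation's hyperplane) and d_r (translation vector lying in the hyperplane) are our own labels for the quantities named above; lower scores indicate more plausible triplets.

```python
import numpy as np

def transh_score(h, t, w_r, d_r):
    """Hyperplane-plus-translation score for a triplet (h, r, t).

    h, t : entity embeddings, shape (k,)
    w_r  : unit normal of relation r's hyperplane, shape (k,)
    d_r  : translation vector lying in the hyperplane, shape (k,)
    """
    # Project head and tail onto the relation-specific hyperplane.
    h_perp = h - np.dot(w_r, h) * w_r
    t_perp = t - np.dot(w_r, t) * w_r
    # The translation should carry the projected head to the projected tail.
    return float(np.sum((h_perp + d_r - t_perp) ** 2))

# Toy usage with random embeddings of dimension k = 50.
k = 50
rng = np.random.default_rng(0)
h, t, d_r = rng.normal(size=(3, k))
w_r = rng.normal(size=k)
w_r /= np.linalg.norm(w_r)  # keep the hyperplane normal at unit norm
print(transh_score(h, t, w_r, d_r))
```

Because each entity keeps a single embedding but is projected differently for each relation, the same entity can occupy different effective positions with respect to different relations; this is what allows reflexive and one-to-many/many-to-one/many-to-many patterns to coexist without extra per-relation copies of the entity vectors.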
Additionally, as a practical knowledge graph is often far from complete, how to construct negative examples that reduce false negative labels during training is very important. Utilizing the one-to-many/many-to-one mapping property of a relation, we propose a simple trick that reduces the chance of false negative labeling.
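A minimal sketch of such a sampling trick follows, assuming per-relation statistics tph (average number of tail entities per head) and hpt (average number of head entities per tail) have been precomputed; the helper corrupt_triplet and its signature are illustrative, not taken from the paper.

```python
import random

def corrupt_triplet(h, r, t, entities, tph, hpt):
    """Build a negative example from a golden triplet (h, r, t).

    entities : list of all entity ids to sample replacements from
    tph      : average number of tail entities per head for relation r
    hpt      : average number of head entities per tail for relation r
    """
    # For a one-to-many relation (large tph), a randomly replaced head is
    # unlikely to form a true triplet, so corrupt the head more often;
    # for many-to-one relations the same argument favors the tail side.
    if random.random() < tph / (tph + hpt):
        return random.choice(entities), r, t  # corrupt the head
    return h, r, random.choice(entities)      # corrupt the tail
```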
We conduct extensive experiments on link prediction, triplet classification, and fact extraction on benchmark datasets such as WordNet and Freebase. The experiments show that TransH delivers significant improvements over TransE in predictive accuracy while retaining a comparable ability to scale up.