Online Influence Maximization under Linear Threshold Model
- Shuai Li,
- Fang Kong,
- Kejie Tang,
- Qizhi Li,
- Wei Chen
Proceedings of the 34th Conference on Neural Information Processing Systems (NeurIPS 2020)
Some typos in the original NeurIPS'2020 paper have been fixed.
Online influence maximization (OIM) is a popular problem in social networks: learn the parameters of the influence propagation model and maximize the influence spread at the same time. Most previous studies focus on the independent cascade (IC) model under edge-level feedback. In this paper, we address OIM under the linear threshold (LT) model. Because node activations in the LT model result from the aggregated effect of all active neighbors, it is more natural to model OIM with node-level feedback. This brings new challenges in online learning, since we only observe the aggregated effect from groups of nodes and the groups themselves are random. Based on the linear structure of node activations, we incorporate ideas from linear bandits and design an algorithm, LT-LinUCB, that is consistent with the observed feedback. By proving a group observation modulated (GOM) bounded smoothness property, a novel result bounding the influence difference in terms of the random observations, we obtain a regret of order $\tilde{O}(\mathrm{poly}(m)\sqrt{T})$, where $m$ is the number of edges and $T$ is the number of rounds. This is the first theoretical result of this order for OIM under the LT model. Finally, we also provide an algorithm, OIM-ETC, with regret bound $O(\mathrm{poly}(m)\,T^{2/3})$; it is model-independent, simple, and has weaker requirements on online feedback and offline computation.
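To make the linear-bandit idea concrete, below is a minimal sketch of how per-node ridge-regression estimates and optimistic weights on incoming edges could be maintained from node-level observations. The class name `InEdgeEstimator`, the parameter `rho`, and the simplified update rule are illustrative assumptions, not the paper's LT-LinUCB pseudocode or its exact confidence radius.

```python
# A minimal sketch of the LinUCB-style estimation that node-level feedback enables.
# Names and the simplified update are assumptions for illustration only.
import numpy as np

class InEdgeEstimator:
    """Ridge-regression estimate of the incoming-edge weights of one node.

    Under the LT model, a node activates when the summed weights of its active
    in-neighbors exceed its threshold, so a node-level observation (which
    in-neighbors were active, whether the node became active) is treated here
    as a noisy linear measurement of its incoming edge weights.
    """

    def __init__(self, in_neighbors, lam=1.0, rho=1.0):
        self.in_neighbors = list(in_neighbors)   # ordered in-neighbors of the node
        d = len(self.in_neighbors)
        self.M = lam * np.eye(d)                 # regularized Gram matrix
        self.b = np.zeros(d)                     # accumulated responses
        self.rho = rho                           # confidence-radius scale (assumed)

    def update(self, active_group, activated):
        """Record one node-level observation.

        active_group : set of in-neighbors that were active this step
        activated    : 1 if the node became active, else 0
        """
        x = np.array([1.0 if u in active_group else 0.0
                      for u in self.in_neighbors])
        self.M += np.outer(x, x)
        self.b += float(activated) * x

    def ucb_weights(self):
        """Optimistic (upper-confidence) estimates of each incoming edge weight."""
        M_inv = np.linalg.inv(self.M)
        theta_hat = M_inv @ self.b
        widths = self.rho * np.sqrt(np.diag(M_inv))
        return np.clip(theta_hat + widths, 0.0, 1.0)   # LT weights lie in [0, 1]


# Usage example: a node with in-neighbors a, b, c and two observations.
est = InEdgeEstimator(in_neighbors=["a", "b", "c"])
est.update(active_group={"a", "b"}, activated=1)
est.update(active_group={"a"}, activated=0)
print(dict(zip(est.in_neighbors, est.ucb_weights())))
```

The optimistic weights would then be fed to an offline influence-maximization oracle to choose the next seed set; the paper's analysis of this loop relies on the GOM bounded smoothness property to relate weight estimation error to regret.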