On Projectivity in Markov Logic Networks
EasyChair Preprint 8668, 20 pages. Date: August 11, 2022

Abstract
Markov Logic Networks (MLNs) define a probability distribution on relational structures over varying domain sizes. Like most relational models, MLNs do not admit consistent marginal inference across domain sizes, i.e., marginal probabilities depend on the domain size. Furthermore, MLNs learned on a fixed domain do not generalize to domains of different sizes. Recent works have established connections between domain-size dependence, lifted inference, and learning from a sub-sampled domain. The central idea of these works is the notion of projectivity. The probability distributions ascribed by projective models render the marginal probabilities of sub-structures independent of the domain cardinality. Hence, projective models admit efficient marginal inference. Furthermore, projective models potentially allow efficient and consistent parameter learning from sub-sampled domains. In this paper, we characterize the necessary and sufficient conditions for a two-variable MLN to be projective. We then isolate a special class of models, namely Relational Block Models (RBMs). In terms of data likelihood, RBMs allow us to learn the best possible projective MLN in the two-variable fragment. Furthermore, RBMs also admit consistent parameter learning over sub-sampled domains.

Keyphrases: Markov Logic Networks, Projectivity, Statistical Relational Learning, Weighted First Order Model Counting, lifted inference
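For readers unfamiliar with the term, projectivity can be stated formally as follows. This is the standard definition from the literature on projective families of distributions and is not quoted from the preprint itself: a family of distributions $(P^{(n)})_{n \ge 1}$, where $P^{(n)}$ is a distribution over relational structures (possible worlds) on a domain of size $n$, is projective if for all $m \le n$ and every structure $\omega$ on a fixed $m$-element sub-domain,
\[
  P^{(n)}\bigl(\{\,\Omega : \Omega\!\restriction_m \,=\, \omega\,\}\bigr) \;=\; P^{(m)}(\omega),
\]
where $\Omega\!\restriction_m$ denotes the restriction of $\Omega$ to that sub-domain. The marginal probability of any sub-structure is therefore independent of the ambient domain size, which is exactly the property the abstract appeals to for efficient inference and learning from sub-sampled domains.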