Bayesian Networks can be seen as Markov Random Fields (MRFs); let's see how. If there is a single edge $A \to B$, the conversion is easy, since the factorization $P(A, B) = P(A)P(B \mid A)$ can be interpreted as a single potential $\phi(A, B)$. In the case there are more edges, as in a chain $A \to B \to C$, the parametrization is not unique, but the conversion is quite easy too, since we can interpret the potential functions as $\phi(A, B) = P(A)P(B \mid A)$ and $\phi(B, C) = P(C \mid B)$. The same holds when one node has several children, as in $B \leftarrow A \to C$: we can model the potential functions as $\phi(A, B) = P(A)P(B \mid A)$ and $\phi(A, C) = P(C \mid A)$. Even here the parametrization is not unique, since we could instead bring the $P(A)$ inside the $\phi(A, C)$.
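As a quick numerical sanity check (a sketch with made-up probability tables, not from the text), we can verify that for a chain $A \to B \to C$ the potentials $\phi(A, B) = P(A)P(B \mid A)$ and $\phi(B, C) = P(C \mid B)$ reproduce the joint exactly:

```python
import numpy as np

# Hypothetical CPTs for a chain A -> B -> C (all variables binary).
P_A = np.array([0.6, 0.4])                # P(A)
P_B_given_A = np.array([[0.7, 0.3],       # P(B | A=0)
                        [0.2, 0.8]])      # P(B | A=1)
P_C_given_B = np.array([[0.9, 0.1],       # P(C | B=0)
                        [0.5, 0.5]])      # P(C | B=1)

# One possible MRF parametrization: absorb P(A) into the first potential.
phi_AB = P_A[:, None] * P_B_given_A       # phi(A, B) = P(A) P(B | A)
phi_BC = P_C_given_B                      # phi(B, C) = P(C | B)

# Joint from the Bayesian network vs. joint from the potentials.
joint_bn = (P_A[:, None, None]
            * P_B_given_A[:, :, None]
            * P_C_given_B[None, :, :])
joint_mrf = phi_AB[:, :, None] * phi_BC[None, :, :]

assert np.allclose(joint_bn, joint_mrf)   # identical, with Z = 1
```

With this choice the normalization constant is 1; scaling one potential up and the other down gives another valid (unnormalized) parametrization, which is exactly the non-uniqueness mentioned above.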

The problem

The three cases we've seen before are quite easy to represent using an MRF, but what if we have a v-structure $A \to C \leftarrow B$? We can see that $A$ and $B$ are dependent given $C$ in the Bayesian Network, but not in the MRF obtained by simply dropping the edge directions.
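The "dependent given the child" behavior (explaining away) can be checked numerically. This is a sketch with made-up tables for a v-structure $A \to C \leftarrow B$; the OR-gate CPT is my own choice for a stark example:

```python
import numpy as np

# Hypothetical CPTs for a v-structure A -> C <- B (all binary).
P_A = np.array([0.5, 0.5])
P_B = np.array([0.5, 0.5])
P_C_given_AB = np.zeros((2, 2, 2))   # indexed [a, b, c]
# C behaves like a deterministic OR of A and B.
for a in range(2):
    for b in range(2):
        P_C_given_AB[a, b, 1 if (a or b) else 0] = 1.0

joint = P_A[:, None, None] * P_B[None, :, None] * P_C_given_AB

# Marginally (child unobserved), A and B are independent.
P_AB = joint.sum(axis=2)
assert np.allclose(P_AB, np.outer(P_A, P_B))

# Conditioned on C = 1, they are not: P(A,B|C=1) != P(A|C=1) P(B|C=1).
P_AB_given_c1 = joint[:, :, 1] / joint[:, :, 1].sum()
P_A_given_c1 = P_AB_given_c1.sum(axis=1)
P_B_given_c1 = P_AB_given_c1.sum(axis=0)
assert not np.allclose(P_AB_given_c1,
                       np.outer(P_A_given_c1, P_B_given_c1))
```

An MRF over the same three nodes with only the edges $A$–$C$ and $B$–$C$ could never produce this pattern: its graph separation would force $A \perp B \mid C$.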

The solution is to *moralize* the graph, meaning that we have to connect the parents with an edge, in order to have

$$P(A, B, C) \propto \phi(A, C)\,\phi(B, C)\,\phi(A, B)$$

This has to be done every time two nodes share a child. Doing this, we lose the marginal independence of the parents (in the Bayesian network, if we don't observe the child $C$, the parents are independent of each other; in the moralized MRF they no longer are). The fact that we have to resort to these tricks in order to convert Bayes nets to MRFs gives us a hint that the class of directed graphs and the class of undirected graphs are different things, although they overlap in some cases.

![[Screenshot 2023-07-16 at 5.15.49 PM.webp| center | 400]]

---
tags: #probability-theory #graph-theory
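Moralization is easy to mechanize. Below is a minimal sketch (the function name and graph representation are my own, not from the text): given a DAG as a dict mapping each node to its set of parents, it keeps every original edge and "marries" every pair of co-parents.

```python
from itertools import combinations

def moralize(parents):
    """Moralize a DAG given as {node: set_of_parents}.

    Returns the set of undirected edges (as frozensets) of the moral
    graph: every original edge, made undirected, plus an edge between
    every pair of nodes that share a child.
    """
    edges = set()
    for child, ps in parents.items():
        # Keep the original parent-child edges (now undirected).
        for p in ps:
            edges.add(frozenset((p, child)))
        # Connect every pair of co-parents.
        for p, q in combinations(sorted(ps), 2):
            edges.add(frozenset((p, q)))
    return edges

# The v-structure A -> C <- B from above becomes a triangle:
dag = {"A": set(), "B": set(), "C": {"A", "B"}}
moral = moralize(dag)
assert frozenset(("A", "B")) in moral   # parents got married
assert len(moral) == 3                  # edges A-C, B-C, A-B
```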