Why GCN? What GCN? How GCN?


Graphs can be used to represent many useful real-world datasets, such as social networks, web link data, molecular structures, geographical maps, etc. Apart from data that naturally comes as a graph, grid-structured (Euclidean) data such as images and text can also be modelled as graphs in order to perform graph analysis on them. Due to the expressiveness of graphs, and a tremendous recent increase in available graph data and computational power, a good amount of attention has been directed towards machine-learning approaches to analysing graphs. So we'll look into one such model, called the Graph Convolutional Network (GCN).


  • Link Prediction: Given an incomplete network, predict whether two nodes are likely to have a link.

    • Friend recommendation in social networks as shown in Fig 1.


Fig 1: Facebook suggesting friends to me.
    • Product recommendation in e-commerce

    • Knowledge Graph Completion


      Fig 2: Completing the relation between the nodes.
    • Reaction Prediction in metabolic networks
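Before any learning enters the picture, link prediction can be done with simple hand-crafted scores; a classic baseline counts the neighbours two nodes share. GCN-based models replace such heuristics with learned node embeddings. A minimal sketch (the toy "friendship" graph below is made up for illustration):

```python
def common_neighbors_score(adj, u, v):
    """Score a candidate link (u, v) by the number of neighbours they share."""
    return len(adj[u] & adj[v])

# Tiny undirected friendship graph, stored as a set of neighbours per node.
adj = {
    "alice": {"bob", "carol"},
    "bob":   {"alice", "carol", "dave"},
    "carol": {"alice", "bob", "dave"},
    "dave":  {"bob", "carol"},
}

# alice and dave are not yet linked but share two friends,
# so the heuristic ranks them as a likely future link.
print(common_neighbors_score(adj, "alice", "dave"))  # -> 2
```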



Convolutional Neural Networks (CNNs) are really powerful; they have the capacity to learn from very high-dimensional data. Say you have a 512x512 pixel RGB image. The dimensionality here is approximately 1 million. If each dimension could take just 10 values, the input space would contain \(10^{1,000,000}\) possible points. But there is a catch! Data like images, videos and sounds all have a specific compositionality to them, which is one of the strong assumptions we make before using CNNs. CNNs extract these compositional features and feed them to the classifier.
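The dimensionality figure above is easy to verify: a 512x512 image already has about a quarter of a million raw pixel values, and with three colour channels it is close to a million.

```python
# Back-of-the-envelope check of the dimensionality claim above.
height = width = 512
channels = 3

grayscale_dims = height * width        # 262,144 raw values per image
rgb_dims = grayscale_dims * channels   # 786,432 -- roughly a million

print(grayscale_dims, rgb_dims)
```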

What do I mean by compositionality?

The key properties of the compositionality assumption are:

  • Locality

  • Stationarity or Translation Invariance

  • Multi-scale: learning hierarchies of representations
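The first two properties can be seen directly in the convolution operation itself. In the minimal 1-D sketch below (the signal and kernel values are made up for illustration), each output depends only on a local 3-value window (locality), and the same 3 kernel weights are reused at every position (stationarity), so the parameter count is independent of the input size:

```python
import numpy as np

def conv1d(signal, kernel):
    """Valid-mode 1-D convolution: slide the kernel over the signal."""
    k = len(kernel)
    return np.array([signal[i:i + k] @ kernel
                     for i in range(len(signal) - k + 1)])

signal = np.array([0., 0., 1., 1., 1., 0., 0.])
edge_detector = np.array([-1., 0., 1.])  # only 3 shared parameters

# Responds at the rising and falling edges of the signal.
print(conv1d(signal, edge_detector))  # -> [ 1.  1.  0. -1. -1.]
```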

But not all types of data lie in a Euclidean space.
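For such non-Euclidean, graph-structured data, the GCN adapts the convolution idea to graphs. As a preview, here is a minimal numpy sketch of a single GCN layer using the symmetrically normalised propagation rule from Kipf & Welling, \(H' = \mathrm{ReLU}(D^{-1/2}\hat{A}D^{-1/2} H W)\); the graph, features, and weights below are random placeholders chosen only to show the shapes involved:

```python
import numpy as np

rng = np.random.default_rng(0)

A = np.array([[0, 1, 0, 0],   # adjacency matrix of a 4-node path graph
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)

A_hat = A + np.eye(4)                        # add self-loops
D_inv_sqrt = np.diag(A_hat.sum(axis=1) ** -0.5)
A_norm = D_inv_sqrt @ A_hat @ D_inv_sqrt     # symmetric normalisation

H = rng.normal(size=(4, 3))                  # node features: 4 nodes, 3 features
W = rng.normal(size=(3, 2))                  # layer weights: 3 -> 2 hidden units

H_next = np.maximum(A_norm @ H @ W, 0)       # ReLU(A_norm H W)
print(H_next.shape)                          # -> (4, 2)
```

Each node's new representation mixes its own features with those of its immediate neighbours, which is the graph analogue of a CNN's local receptive field.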




Defferrard, M., Bresson, X., & Vandergheynst, P. (2016). Convolutional Neural Networks on Graphs with Fast Localized Spectral Filtering. NIPS. http://arxiv.org/abs/1606.09375

Kipf, T. N., & Welling, M. (2016). Semi-Supervised Classification with Graph Convolutional Networks. http://arxiv.org/abs/1609.02907

Kipf, T. N., & Welling, M. (2016). Variational Graph Auto-Encoders. NIPS Workshop. http://arxiv.org/abs/1611.07308

Geometric Deep Learning on Graphs and Manifolds (talk): https://www.youtube.com/watch?v=LvmjbXZyoP0