## The topology of a mathematics book

What does the structure of a math book really look like? What theorems are the most important ones?

Reading and trying to understand a mathematics book is a fun and often exciting experience. Still, sometimes it feels like being lost deep in the Wikipedia jungle, constantly sent back and forth between prerequisites and dependencies, unable to see the forest for the trees.

The thing about mathematics is that a theorem can depend on another theorem, which in turn can depend on lemmas, propositions, etc. This structure is actually a graph, and in order to visualize it, I have taken a book on abstract algebra as an example and written some *Python* code that converts the book into a graph in the graph database *Neo4j*.

You don’t need to know anything at all about Python and Neo4j in order to follow along in this article.

Specifically, I chose the book *Abstract Algebra* by Justin R. Smith, which you can find online if you are curious. I have no special attachment to that book other than finding it well-structured and easily available.

It contains what you’d expect from any abstract algebra book: basic number theory, group theory, ring theory, module and vector space theory, field theory, Galois theory, and a few tasty add-ons such as category theory and group representations.

We will try to understand how these theories are woven together and unravel some of this structure to see if we can make sense of it.

The book graph consists of 6 different kinds of vertices (these are sometimes called nodes by the way), namely CHAPTER (orange), DEFINITION (blue), THEOREM (red), PROPOSITION (beige), LEMMA (green), and COROLLARY (pink).

There are only two kinds of edges (sometimes called relations or relationships) namely IN_CHAPTER and DEPENDS_ON.

We can visualize all the possible ways the different types of vertices can connect by the following *schema*:

Of course, the label THEOREM represents a theorem from the book and the name indicates what theorem it is. The same goes for the other vertices. All the vertices have an IN_CHAPTER relationship to a CHAPTER vertex indicating in which chapter they are defined.

If a specific statement is mentioned in the proof of some other statement then there will be a DEPENDS_ON relationship in the obvious direction. When we load the whole book into the graph, the 533 pages of mathematics look like the featured image of this article.
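Before running any queries, it helps to see what the parsed data might look like. Below is a minimal sketch of how the parser's output could be represented in plain Python before loading it into Neo4j. Only LEMMA 3.1.5, PROPOSITION 3.1.1, and THEOREM 11.5.1 are names that actually appear later in this article; the chapter labels are inferred from the numbering, and the exact shape of the real data structure is an assumption.

```python
# Vertices keyed by name, carrying a label and a chapter (illustrative shape).
vertices = {
    "LEMMA 3.1.5": {"label": "LEMMA", "chapter": "Chapter 3"},
    "PROPOSITION 3.1.1": {"label": "PROPOSITION", "chapter": "Chapter 3"},
    "THEOREM 11.5.1": {"label": "THEOREM", "chapter": "Chapter 11"},
}

# DEPENDS_ON edges point from a statement to a result used in its proof.
depends_on = [
    ("LEMMA 3.1.5", "PROPOSITION 3.1.1"),
    ("THEOREM 11.5.1", "LEMMA 3.1.5"),
]

# IN_CHAPTER edges can be derived directly from the chapter property.
in_chapter = [(name, props["chapter"]) for name, props in vertices.items()]
print(in_chapter[0])  # ('LEMMA 3.1.5', 'Chapter 3')
```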

The chapters seem to form big islands that connect to each other through the various proven statements. By this image, we already get an idea about how some results are more important than others, but how do we actually define what we mean by that in the first place?

This is where the graph and all its built-in algorithms come into the picture. Putting the book in a graph not only provides us with a nice visual representation; it also makes an ocean of powerful algorithms available to us.

In the following sections, we will explore this graph together, unlocking topological secrets about the structure of an abstract algebra book!

Of course, I have made some assumptions when writing the underlying code for parsing the text. It is probably not completely error-free. But the good news is that we can sanity-check a lot of our graph results by looking at the actual book.

One of the first questions that entered my mind when I created this graph was: which vertex has the highest in-degree (number of incoming relationships)? In plain English: which result has the greatest number of other results depending directly on it (mentioning it in their proofs)?

It turns out that 4 vertices share the highest in-degree in the graph, namely 4.

The four results are:

- LEMMA 3.1.5.
- LEMMA 4.6.13.
- PROPOSITION 6.2.17.
- LEMMA 8.6.2.
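The in-degree query above can be sketched in plain Python: count how often each result appears as the target of a DEPENDS_ON edge. The edge list below is toy data echoing the finding in the text (only LEMMA 3.1.5 and THEOREM 11.5.1 are real names); the actual query runs against the graph database.

```python
from collections import Counter

# DEPENDS_ON edges: (statement, result used in its proof). Toy data.
depends_on = [
    ("PROPOSITION A", "LEMMA 3.1.5"),
    ("PROPOSITION B", "LEMMA 3.1.5"),
    ("PROPOSITION C", "LEMMA 3.1.5"),
    ("THEOREM 11.5.1", "LEMMA 3.1.5"),
    ("THEOREM 11.5.1", "SOME OTHER RESULT"),
]

# In-degree of a result = number of statements depending directly on it.
in_degree = Counter(target for _, target in depends_on)
most_depended_on, degree = in_degree.most_common(1)[0]
print(most_depended_on, degree)  # LEMMA 3.1.5 4
```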

Let’s take one of them, say LEMMA 3.1.5. The lemma is as follows.

This result is sometimes called *Bézout’s Identity*.

Let us now see what its neighborhood looks like.

There are 3 propositions and a theorem that depend on LEMMA 3.1.5. Unfortunately, their names are too long to fit on the nodes visually, but they are stored as properties on the nodes behind the scenes, and we can access them through so-called Cypher queries if we want.

If we unpack its neighbors’ neighborhoods we get

Wow, the theorem depending on LEMMA 3.1.5 seems to be important. Let’s look that up. It turns out to be THEOREM 11.5.1.

It doesn’t really matter what this means, but the proof is about two full pages, so it seems fair that it has a bunch of dependencies.

Going a little further than this and including CHAPTER vertices reveals that LEMMA 3.1.5 actually links several different chapters together topologically speaking.

Okay but what if we want to know which result has the most dependencies?

No problem; we simply write a little Cypher query to ask our graph that question and voilà.

What do you know, THEOREM 11.5.1 lands right at the top of the list, depending on 8 other proven statements! Of course, we knew from the investigation above that it depended on 8 other results, but not that it was the top one.
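The "most dependencies" question is just the mirror image of the previous one: count outgoing DEPENDS_ON edges instead of incoming ones. A plain-Python sketch on toy data (the dependency names of THEOREM 11.5.1 are placeholders, not the book's actual results):

```python
from collections import Counter

# DEPENDS_ON edges again: (statement, result used in its proof). Toy data.
depends_on = [("THEOREM 11.5.1", f"RESULT {i}") for i in range(8)]
depends_on += [("LEMMA 3.1.5", "PROPOSITION 3.1.1")]

# Out-degree of a statement = how many results its proof relies on.
out_degree = Counter(source for source, _ in depends_on)
top, n_deps = out_degree.most_common(1)[0]
print(top, n_deps)  # THEOREM 11.5.1 8
```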

As mentioned above, putting the book in a graph unlocks a world of tools — so-called graph algorithms. In this section, we will run some of those to understand the structure of our book better.

The first thing to do when you want to run data science algorithms on such a graph database is to create a relevant subgraph that contains only the necessary parts. This ensures that the runtimes of the algorithms don't grow too large.

## Subgraphs

Luckily, I knew this in advance, so I created a label that all the proven statements share, namely RESULT. This way, we can create a subgraph containing only these proven statements and the DEPENDS_ON relationship, and call it “book”. For the interested reader, we can do this in Neo4j in the following way.

```
CALL gds.graph.create('book', 'RESULT', 'DEPENDS_ON')
```

Note however that the syntax depends very much on your version of Neo4j!
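Conceptually, the projection just filters the full graph down to RESULT vertices and the DEPENDS_ON edges between them. Here is a plain-Python analogue of that filtering step, on illustrative data:

```python
# Vertices with their labels; a node can carry several labels in Neo4j.
vertices = {
    "LEMMA 3.1.5": {"LEMMA", "RESULT"},
    "THEOREM 11.5.1": {"THEOREM", "RESULT"},
    "Chapter 3": {"CHAPTER"},
}
edges = [
    ("THEOREM 11.5.1", "LEMMA 3.1.5", "DEPENDS_ON"),
    ("LEMMA 3.1.5", "Chapter 3", "IN_CHAPTER"),
]

# Keep only RESULT vertices and DEPENDS_ON edges between them.
results = {v for v, labels in vertices.items() if "RESULT" in labels}
book = [(s, t) for s, t, rel in edges
        if rel == "DEPENDS_ON" and s in results and t in results]
print(book)  # [('THEOREM 11.5.1', 'LEMMA 3.1.5')]
```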

Now that we have this subgraph to work with, let’s try to see what results are most important for the efficient flow of knowledge through the graph.

## Betweenness centrality

We can do this through an algorithm known as *betweenness centrality*. Roughly, it measures the following: for every pair of results, find the fastest way to get from one to the other through dependencies (i.e. with the fewest dependencies along the way); which results appear most frequently on these shortest paths?
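In production one would use Neo4j's built-in (and far more efficient) implementation, but the definition can be illustrated with a naive sketch: enumerate all shortest paths between every pair of vertices and count how often each vertex sits in the interior of one. The toy graph below is purely illustrative.

```python
from collections import deque

def all_shortest_paths(adj, s, t):
    """Enumerate every shortest path from s to t with a breadth-first search."""
    best, found = None, []
    queue = deque([[s]])
    while queue:
        path = queue.popleft()
        if best is not None and len(path) > best:
            continue  # longer than a shortest path already found
        node = path[-1]
        if node == t:
            best = len(path)
            found.append(path)
            continue
        for nb in adj.get(node, ()):
            if nb not in path:
                queue.append(path + [nb])
    return found

def betweenness(adj):
    """Naive betweenness centrality: count appearances on shortest paths."""
    nodes = set(adj) | {v for nbs in adj.values() for v in nbs}
    score = {v: 0.0 for v in nodes}
    for s in nodes:
        for t in nodes:
            if s == t:
                continue
            paths = all_shortest_paths(adj, s, t)
            for path in paths:
                for v in path[1:-1]:  # interior vertices only
                    score[v] += 1 / len(paths)
    return score

# Toy graph: C sits on every shortest path from the A's to the B's.
adj = {"A1": ["C"], "A2": ["C"], "C": ["B1", "B2"]}
print(betweenness(adj)["C"])  # 4.0
```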

Let’s run the algorithm.

Not surprisingly at this point, perhaps, LEMMA 3.1.5 comes in at second place. The top proposition turns out to be a neighbor of the lemma; in fact, it is the neighbor to the northeast of the lemma in the image from before, i.e.

Taking a look at, for example, LEMMA 8.6.2 in chapter 8, we see that it too is quite important for binding the graph together.

This lemma is a fundamental result in Galois theory: it gives a complete characterization of when field extensions are Galois.

## PageRank

Let us turn to the famous *PageRank* algorithm.

PageRank is an algorithm used by Google to rank web pages in their search engine results.

“PageRank works by counting the number and quality of links to a page to determine a rough estimate of how important the website is. The underlying assumption is that more important websites are likely to receive more links from other websites.”

~Wikipedia.
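The idea behind PageRank fits in a few lines of plain Python: every node repeatedly distributes its current rank over its outgoing links, with a damping factor modeling a "random surfer". This is a minimal power-iteration sketch on a toy dependency graph (node names are illustrative), not Neo4j's implementation.

```python
def pagerank(edges, damping=0.85, iterations=50):
    """Plain power-iteration PageRank on a directed edge list."""
    nodes = {n for edge in edges for n in edge}
    out_links = {n: [t for s, t in edges if s == n] for n in nodes}
    rank = {n: 1 / len(nodes) for n in nodes}
    for _ in range(iterations):
        new = {n: (1 - damping) / len(nodes) for n in nodes}
        for n in nodes:
            targets = out_links[n]
            if targets:  # share this node's rank among its targets
                for t in targets:
                    new[t] += damping * rank[n] / len(targets)
            else:  # dangling node: spread its rank over everyone
                for t in nodes:
                    new[t] += damping * rank[n] / len(nodes)
        rank = new
    return rank

# Dependencies as "links": both theorems cite the lemma in their proofs.
edges = [("THEOREM A", "LEMMA"), ("THEOREM B", "LEMMA"), ("LEMMA", "PROP")]
rank = pagerank(edges)
print(rank["LEMMA"] > rank["THEOREM A"])  # True
```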

We can treat the results in the book as web pages and dependencies as links between them. When we do this and run the algorithm, we get the following.

Not surprisingly, LEMMA 3.1.5 comes back to haunt us yet again, closely followed by PROPOSITION 3.1.1, which is the only result LEMMA 3.1.5 depends on.

PROPOSITION 3.1.1 is the following:

This is a basic number theoretic result.

## Community detection

Let us try another famous algorithm: the Louvain algorithm for community detection. We can think of the graph as consisting of small clusters, or communities, and Louvain can be explained as follows.

“The Louvain method is an algorithm to detect communities in graphs. It evaluates how much more densely connected the nodes within a community are, compared to how connected they would be in a random graph.”

~Neo4j
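Louvain itself is an involved greedy optimization, but the quantity it optimizes, *modularity*, is simple to compute: the fraction of edges inside communities minus what a random graph with the same degrees would give. Here is a sketch for an undirected edge list (the toy graph and partition are illustrative):

```python
def modularity(edges, community):
    """Modularity Q of a partition of an undirected graph given as an edge list."""
    m = len(edges)
    degree = {}
    for u, v in edges:
        degree[u] = degree.get(u, 0) + 1
        degree[v] = degree.get(v, 0) + 1
    # Fraction of edges that stay inside a community ...
    internal = sum(1 for u, v in edges if community[u] == community[v]) / m
    # ... minus the expected fraction in a random graph with the same degrees.
    nodes = list(degree)
    expected = sum(degree[u] * degree[v] / (2 * m) ** 2
                   for u in nodes for v in nodes
                   if community[u] == community[v])
    return internal - expected

# Two triangles joined by a single edge: the obvious two-community split.
edges = [("a", "b"), ("b", "c"), ("a", "c"),
         ("d", "e"), ("e", "f"), ("d", "f"), ("c", "d")]
split = {"a": 0, "b": 0, "c": 0, "d": 1, "e": 1, "f": 1}
lumped = {n: 0 for n in split}  # everything in one community scores 0
print(modularity(edges, split) > modularity(edges, lumped))  # True
```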

When we run this algorithm and ask it which communities have the most vertices, we get the following list.

So it looks like the largest community is the one with *id = 14*. This by itself doesn't tell us anything, so let's look it up. Amazingly, when we ask to see the biggest community, it turns out to be a monster.

This cluster actually ties together chapters 3, 4, 5, 7, and 12. These are number theory, group theory, ring theory, field theory, and algebraic geometry respectively. This community shows how algebraic geometry and field theory depend on ring theory, and how ring theory and group theory depend on number theory!

## Similarity

The next obvious thing to try is called *node similarity*. There are many variants of this family of algorithms but the common trait is that they try to find “similar” vertices based on the relations in the graph.

We can imagine that this is useful when studying mathematics. When you are studying a theorem, it might be worth studying theorems with similar dependencies while you have all that knowledge in the air anyway.
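One of the simplest members of this family is *Jaccard similarity*: compare the neighborhoods of two vertices as sets. A sketch, where the dependency sets are purely illustrative (the real edges of LEMMA 4.6.19 and THEOREM 4.6.16 live in the graph):

```python
def jaccard(a, b):
    """Jaccard similarity of two neighbourhoods: |intersection| / |union|."""
    if not a and not b:
        return 0.0
    return len(a & b) / len(a | b)

# Illustrative neighbourhoods: which results each statement is connected to.
neighbours = {
    "LEMMA 4.6.19":   {"LEMMA 4.6.13", "THEOREM T"},
    "THEOREM 4.6.16": {"LEMMA 4.6.13", "THEOREM T"},
    "COROLLARY C":    {"LEMMA L"},
}
print(jaccard(neighbours["LEMMA 4.6.19"], neighbours["THEOREM 4.6.16"]))  # 1.0
```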

Let us run such an algorithm on our book graph and take one of the top results.

The two similar results are LEMMA 4.6.19 and THEOREM 4.6.16, right below it in the above image. Even though they do not depend directly on each other and are not the same type of result, they share topology!

They are both dependencies of LEMMA 4.6.13, and they both depend on a theorem which in turn is connected to a corollary (in opposite directions, though). They are also both connected, indirectly and in opposite directions, to LEMMA 4.6.24.

The choice of mapping out a math book like this is somewhat arbitrary. One could imagine law texts, linked document collections, and other interesting use cases. For example, in a law text, we could find the paragraphs with the most references to other paragraphs, or even check our own texts for circular references!
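Checking for circular references is a classic graph problem: a depth-first search that reports a "back edge" finds a cycle. A small sketch, with hypothetical section names:

```python
def find_cycle(adj):
    """Depth-first search for a cycle in a directed reference graph."""
    WHITE, GREY, BLACK = 0, 1, 2  # unvisited / on current path / finished
    color = {n: WHITE for n in adj}

    def dfs(node, path):
        color[node] = GREY
        for nb in adj.get(node, ()):
            if color.get(nb, WHITE) == GREY:  # back edge: we found a cycle
                return path + [node, nb]
            if color.get(nb, WHITE) == WHITE:
                found = dfs(nb, path + [node])
                if found:
                    return found
        color[node] = BLACK
        return None

    for n in list(adj):
        if color[n] == WHITE:
            found = dfs(n, [])
            if found:
                return found
    return None

# A circular reference between three hypothetical sections.
refs = {"§1": ["§2"], "§2": ["§3"], "§3": ["§1"]}
print(find_cycle(refs))  # ['§1', '§2', '§3', '§1']
```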

This is more than a tool for the lazy (or very busy?) student who wants to know which theorems are most important in order to save time.

I could imagine several interesting analyses using this and I have highlighted a few ideas in this article.

In the future, I will try this technology on texts other than mathematics, but of course also on other math books. How about a title like *“The Topology of a Topology Book”*? And without it being clickbait!

If you have a well-structured math book in mind, or a great idea for applying this to some other text, please let me know.


Thanks for reading.
