You may have heard of NaNoWriMo — National Novel Writing Month — which is an event where aspiring authors attempt to start and finish a 50,000-word novel in the month of November. NaNoGenMo is a similar event that instead challenges aspiring authors to write code that will generate a 50,000-word novel.

This blog post is the story of my NaNoGenMo effort for 2017, which culminated in The Several Houses of Brian, Spencer, Liam, Victoria, Brayden, Vincent, and Alex, an 800-page novel (PDF download) generated by a Python script. I’m sharing this because I’m pretty happy with the outcome, and I learned a lot about Python in the process.

I’ve attempted NaNoGenMo every year since 2014, and completed it just twice. (Although, to be precise, I was late by a few days both times.) In 2015, I made a graphic novel from YouTube videos. (HTML version.)

This year, I wanted to make a children’s book, so I returned to an idea I’d been toying with in various iterations. Do you know the nursery rhyme, “The House that Jack Built”? My kids have an illustrated book version that I kind of like reading, but mainly I was intrigued by the way it adds a line for each stanza. Within the narrative of the poem, there’s no real reason it has to stop with the farmer, and it very quickly gets away from anything having to do with the actual house, so I figured that given a plausible understanding of relationships between objects, one could generate an infinitely long version of the poem.

When I started working on this for 2016, I made some progress, but got hung up on the rhyme scheme. Working with the Wordnik API and the excellent Pronouncing package for Python, I managed to generate decent-sounding alliterative groups like:

pants and parole
pasture and poll

dyed and designed
menaced and mined

… which were pretty good, but didn’t help with the noun->verb->noun accumulation of objects and lines that the poem’s structure demanded.

The real breakthrough was discovering ConceptNet, a human-sourced database of concepts, which in this sense means “things and the ways in which those things relate to other things.”

Conveniently, ConceptNet has an open API, and with a little practice, I found I could run queries that generate a list of concepts for a given entry point. For example, using the “ReceivesAction” relationship type, it’s possible to learn that, so far as ConceptNet knows, a house is something that can be built, built of brick, divided into several rooms, found in a neighborhood, or located on an estate.
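As a concrete example, building such a query comes down to assembling a URL. The endpoint and parameter names below follow ConceptNet’s public REST API; the helper function name is my own:

```python
# Build a ConceptNet query URL for a given word and relation type.
# Endpoint and parameters per ConceptNet's public API docs.
def conceptnet_query_url(word, rel, limit=20):
    return ('http://api.conceptnet.io/query?node=/c/en/' + word
            + '&rel=/r/' + rel + '&limit=' + str(limit))

url = conceptnet_query_url('house', 'ReceivesAction')
# requests.get(url).json()['edges'] would then hold the matching concepts
```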

A little Python function makes this a bit tidier, so I can just ask it for that relationship and get a list of the text:


import requests

concept_cache = {}   # word -> list of related labels, to avoid repeat API calls
contributors = []    # ConceptNet contributor credits, collected for the book

def get_some(word, rel, direction, number):
    if word in concept_cache:
        return concept_cache[word]
    # The base URL was elided in the original post; this is ConceptNet's
    # documented query endpoint.
    url = ('http://api.conceptnet.io/query?node=/c/en/' + word
           + '&rel=/r/' + rel + '&limit=' + str(number))
    objects = requests.get(url).json()
    things = []
    for thing in objects['edges']:
        label = simpler(thing[direction]['label']).lower()  # simpler() is a helper defined elsewhere
        if simpler(word) not in label:
            things.append(label)
            contributors.extend(contrib(thing['sources']))  # contrib() extracts contributor names
    concept_cache[word] = things
    return things

This function also adds to and checks against a cache of concepts, to reduce the load on ConceptNet, and it adds the ConceptNet contributors to a list so that they can be properly credited if this concept ends up being used in the book.

Pretty neat, I guess!

Text Generation

Now, the basic idea for the text of the book is to find objects and the actions that connect them to other objects. I originally tried to build this starting from “the house” and working backward. For example, the progression would go like:

  1. house
  2. something that can be done to a house + 1
  3. something that can do 2 + 2 + 1
  4. something that can be done to 3 + 3 + 2 + 1

This method produced some interesting results, but it didn’t end up working out. It took me a while to figure out why, partly because I had to learn how to deal with data structures in Python — I made an object class with “next links” as a property, which looks for the next links in the chain and creates instances of itself within itself until it runs out. The problem was that I couldn’t figure out how to walk back through that structure and build a sequential chain, and also, as I later realized, the number of plausible next-links increases geometrically with each generation, so this approach quickly filled up my memory without producing anything.
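To put rough numbers on that blow-up: if every concept plausibly links to even a handful of next concepts, the recursive tree grows exponentially with each generation. A quick back-of-the-envelope check (the branching factor here is my own illustrative assumption):

```python
# Count nodes in a full tree with a fixed branching factor, to show why
# the recursive "instances of itself within itself" approach exploded.
def tree_size(branching, depth):
    # 1 root + b + b^2 + ... + b^depth
    return sum(branching ** g for g in range(depth + 1))

print(tree_size(10, 6))   # ten next-links each, six generations: 1,111,111 nodes
```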

I got stuck here for a while, but I finally hit on something that worked: a list of lists of chained concepts and an algorithm that works through that stack chain by chain, adding new links for every generation. The code for this function is kind of long to paste here, so here’s a flowchart of how it works:



In text form, this creates a list of chains, then for each generation of the chain, it checks the last-added concept for concepts to connect off of the “CapableOf” relationship. For example, if it starts with “Cheetah”, then it knows that among other things a cheetah can “run faster than a person.” For the next generation, it isolates “person” as the object in the previous sentence with TextBlob, then finds out what a person is capable of, which is quite a lot of things, it turns out.
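In code, that loop looks roughly like this sketch, with the ConceptNet lookup and the TextBlob parsing stubbed out — `CAPABLE_OF`, `extract_object`, and `grow_chains` are stand-ins of my own, not the actual implementation:

```python
# Stubbed lookup table standing in for ConceptNet's "CapableOf" relation.
CAPABLE_OF = {
    'cheetah': ['run faster than a person'],
    'person': ['drive a car', 'read a book'],
    'car': ['carry a person'],
    'book': ['teach a person'],
}

def extract_object(action):
    # Stand-in for the TextBlob step: take the last word of the action phrase.
    return action.split()[-1]

def grow_chains(seed, generations, max_new=50):
    # Each chain is a list of concepts/actions; every generation extends
    # every chain with up to max_new next links found for its last object.
    chains = [[seed]]
    for _ in range(generations):
        new_chains = []
        for chain in chains:
            last = chain[-1] if len(chain) == 1 else extract_object(chain[-1])
            for action in CAPABLE_OF.get(last, [])[:max_new]:
                new_chains.append(chain + [action])
        chains = new_chains or chains  # keep old chains if nothing new found
    return chains

chains = grow_chains('cheetah', 3)
```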

After some trial and error with this method, I found that once a chain gets going past the 3rd generation, it can run indefinitely. At least, the 55-concept chains that I found I needed to get to my target word length all completed pretty easily, with each generation successfully finding many more than the maximum of 50 new chains to add.

With a little tense and number manipulation via pattern.en, I can turn these chains into sentences, and walking backward through the chain produces a coherent cumulative chapter.
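Walking backward and accumulating clauses is the easy part; stripped of the pattern.en tense handling, the idea is roughly this (the clause texts and helper name are my own examples):

```python
# Build the cumulative "House that Jack Built" sentence from a chain of
# (NOUN, verb phrase) links, newest link first.
def cumulative_sentence(links, builder='Alex'):
    clauses = ['the {} that {}'.format(noun, verb) for noun, verb in links]
    return 'This is ' + ' '.join(clauses) + ' the HOUSE that {} built.'.format(builder)

print(cumulative_sentence([
    ('DOG', 'hunted'),
    ('CAT', 'stayed inside'),
]))
# → This is the DOG that hunted the CAT that stayed inside the HOUSE that Alex built.
```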

At this point, I’m pretty happy with the results, and even though it’s a bit repetitive, it gets stuck in some interesting loops like “teacher <-> student” and “parent <-> child”. Here’s a sample of just the text output in a 55-concept chain:

This is the RABBIT that bit the HAND that cupped the FLOWER that delighted the PERSON that forgot to feed the CAT that slept in the SUN that dried the WET PERSON that tasted the FOOD that pleased the HUNGRY PERSON that controlled the COMPUTER that taught lessons to the STUDENT that thanked the TEACHER that called on the STUDENT that payed the TEACHER that spanked the STUDENT that minded the TEACHER that guided the STUDENT that fooled the TEACHER that circled the MISTAKE that cost the MONEY that furthered the FORTUNE that brought worry and the STRESS that caused mental the PAIN that provided information to the PERSON that kissed the DOG that marked the TREE that shaded the CAR that killed the PERSON that boxed up the GIFT that pleased the PERSON that gave the FRIEND that shouldered the PERSON that owned the CAR that pulled the BOAT that went on the WATER that doused the FIRE that hurt the PERSON that reported the CRIME that angered the PERSON that feed the HORSE that carried the PERSON that held the FORK AND KNIFE that cut the MAN ‘S HAND that cupped the WATER that shorted the ELECTRICITY that killed the PERSON that armed the NUCLEAR WEAPON that hurt the PERSON that desired to have the SEX that delighted nearly the ANYONE that called on the FRIEND that needled the PERSON that needed the NEW CAR that killed the DOG that hunted the CAT that stayed inside the HOUSE that Alex built.

Illustrating the Book

This is a children’s book, after all, so I needed a way to make it illustrated. For this, I found The Noun Project provided the perfect way to match words with images. I didn’t bother trying to illustrate the way concepts relate to each other, relying simply on random placement of the images to convey some kind of meaning.

As it turns out, the Noun Project has some pretty amazingly specific concepts in it, and with the repetitiveness inherent to my text, it was a pretty good bet that I’d find an image for every noun. Still, I built in some generic fallbacks like “object”, and because “person” shows up so frequently, I had it choose more specific types of person, like “child”, in its place.
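The fallback logic is simple enough to sketch; the specific substitute list here is my own guess at what “more specific types of person” might include:

```python
import random

PERSON_TYPES = ['child', 'doctor', 'farmer', 'teacher']  # assumed substitutes

def icon_search_term(noun):
    """Pick the term to send to the Noun Project search."""
    if noun == 'person':
        return random.choice(PERSON_TYPES)  # vary the most frequent noun
    return noun

def fallback_term(noun, found):
    # If no icon came back for the noun, fall back to a generic image.
    return noun if found else 'object'
```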

The Noun Project has an API, with a generous rate limit of 5000 requests per month on the free plan. And even though that’s a pretty high limit, I still added a caching element and overall I still haven’t exceeded 10% of my limit while testing and generating.

Here’s the code that chooses an icon, crops out the credit (which will be added back later), and colors the PNG according to the present chapter’s palette. That get_icon code returns the HTML for the chosen icon, and that HTML gets added to both the recto and verso page for the current concept.
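The shape of that step is roughly the following sketch: assemble an ImageMagick `convert` command that crops and tints the icon, then produce an `<img>` tag. The crop geometry, color handling, and helper names here are all illustrative assumptions, not the actual code:

```python
def imagemagick_command(src, dest, color, crop='420x420+40+40'):
    # Crop away the credit text baked into the PNG, then tint the icon with
    # the chapter palette color via -fill/-colorize. Geometry is illustrative.
    return ['convert', src, '-crop', crop, '+repage',
            '-fill', color, '-colorize', '100%', dest]

def icon_html(path, concept):
    # The returned HTML gets added to both the recto and verso pages.
    return '<img class="icon" src="{}" alt="{}">'.format(path, concept)

# subprocess.run(imagemagick_command('raw.png', 'tinted.png', '#aa3355'))
```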

Here, ImageMagick does the cropping and coloring parts, and it also powers the transformations for the recto pages’ backgrounds. As usual with ImageMagick, I’ve found Fred’s ImageMagick Scripts really useful, and with some experimentation, I got some pretty good results with the watercolor effect.

My code gets a house image from Flickr, crops it appropriately, and runs it through watercolor. Overlaying it with a mostly-transparent watercolor paper texture helped a lot, which I found easiest to do with HTML.

This decision to work with HTML for layout presented a few problems when I tried to generate the PDF. I first tried this with pdfkit, and while it worked okay, it was very slow. I could live with that, but there were some annoying issues where the output would be super duper small, which I resolved by cranking the DPI way up; this made the file size very large, and was inconsistent. Some research suggests that this could have been related to the Retina display on my Mac, but I’m not sure.

Fortunately, Weasyprint worked a lot faster and better. I recommend it!


Giving Credit

My work in putting this book together is really just gluing together things that other people have done, so it’s important that those contributors get credit. Each of the three sources I used — Flickr, ConceptNet, and The Noun Project — exposes its contributors’ credits through its API, so I just had to add in a call to each of those so that my script can build a credits object as it goes along.
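A sketch of that credits object (the field and contributor names below are placeholders of my own; each source’s API response gets reduced to a contributor name per asset, de-duplicated per source):

```python
from collections import defaultdict

credits = defaultdict(set)  # source name -> set of contributor names

def add_credit(source, contributor):
    credits[source].add(contributor)

add_credit('The Noun Project', 'Jane Doe')
add_credit('ConceptNet', 'someuser')
add_credit('The Noun Project', 'Jane Doe')  # re-used icon, credited once
```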

In a typical run — which will include some re-use of icons as well as some ConceptNet contributors with multiple credits — these credits will identify about 220 icon creators and 250 concept contributors. Pretty cool!

Reading It

This is the COMPUTER that taught lessons to the STUDENT that thanked the TEACHER that called on the STUDENT that payed the TEACHER that spanked the STUDENT that minded the TEACHER that guided the STUDENT that fooled the TEACHER …

Even though I didn’t plan on this, it turns out that most of the words that will reliably chain together tend to be simple nouns with easy-to-parse relationships. There were only a few instances of “SEX”, and the occasional relationship that can be read as innuendo (as in “spanked” above).

Ultimately, my goals with this project were 1) to get something done and 2) to learn more about Python. I don’t know if anyone else will enjoy this book as much as I do, but I definitely accomplished my basic goals.