Building a BorgesBot

Borges never went so far as to write about artificially generated prose, but there is something Borgesian about the concept. That’s why I think my newest foray into computer coding is particularly appropriate. It’s a program that generates fake Borges stories, like this one:

Según secreto

Erlord , para simular que fue un hecho para percibir en el sótano ? Lo amarró encima de un dado. Ashe lo dejó en el hemisferio boreal ( de nashville , tennessee) uno de los palacios, pero por una línea razonable o una justa noticia habrá millones de actos deleitables o atroces; naturalmente , ese adjetivo numeral vale por infinitos.

El examen lo satisfizo . Por increíble que parezca , yo tenía que hacer en la hora de su casa le reveló que también era una apariencia , que varían el número de los catorce nombres que figuraban en el antiguo lecho de un batiente recién pintado que alguien sea capaz de nombrar el olvidado premio , pero el jefe le ordena. La azarosa crónica de los yahoos es precaria; esa vinculación , en el mundo real .

The story is nonsense, of course, but it’s written in a language that is particularly Borgesian. How does it work? To anyone who knows about coding, the process is decidedly amateur. I feed in text from Borges stories (I’ve been using the stories on Ciudad Seva). The program reads the stories and builds an extremely simple language model, called a trigram model because it looks at words in overlapping sets of three and learns which word tends to follow each pair. It then uses that model to output artificial text.
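In outline, a trigram generator of this kind can be sketched in a few lines of Python. This is a minimal illustration under my own assumptions, not the program I actually wrote; the function names are invented for the example:

```python
import random
from collections import defaultdict

def build_trigram_model(tokens):
    """Map each pair of consecutive words to the words observed after it."""
    model = defaultdict(list)
    for a, b, c in zip(tokens, tokens[1:], tokens[2:]):
        model[(a, b)].append(c)
    return model

def generate(model, seed, length=20, rng=None):
    """Walk the model from a seed bigram, sampling a successor at each step."""
    rng = rng or random.Random(0)
    a, b = seed
    out = [a, b]
    for _ in range(length):
        followers = model.get((a, b))
        if not followers:  # dead end: this bigram never recurs in training text
            break
        nxt = rng.choice(followers)
        out.append(nxt)
        a, b = b, nxt
    return " ".join(out)
```

Because followers are stored with repetition, common continuations are sampled more often, which is what gives the output its vaguely authorial flavor.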

What can we do with it? So far, all I’ve done is generate gibberish. The BorgesBot is my proudest achievement, but the model works for any author. Here, for example, are three sentences based on three different translations of Don Quixote (thanks to Project Gutenberg):

John Ormsby (1885): For us to let myself be handled by all the women and make it so happened, but do not expect any prize, and rocinante , for having transgressed the laws of the vessel alive , even though they did so the judge, for god, I would not go, Leonela, as there was nothing that is all nonsense, models of folly; though every time I argued the matte , since I saw his antagonist ‘s lance, filled with amazement at the conversation of the message .

Peter Anthony Motteux (1712): “I have him at Saragossa , ” replied Don Quixote ‘s weak points , to perform his promise .

Arvid Paulson (1922): The Duke had had a crazy man before them and inquired what all the time , and when he heard the little boys in the woods that they had not interfered and made to his advantage to consent to prescribe a good omen ?

Aside from being hilarious, these sentences don’t offer much in the way of literature or scholarship. (Their most frequent use is producing those weird e-mails that appear every so often in your spam folder.) Run in the other direction, on the other hand, the models are more useful. If I feed a sentence of Don Quixote into my model, for example, it can tell me which translator is most likely to have written it, a useful tool when authorship is in doubt. This is why, as I recently read in Humanities, one of the first computer-based literary projects was the attribution of the anonymous Federalist Papers.
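One way to run that comparison, though not necessarily the way my program does it, is to score the mystery sentence under each translator’s trigram counts and pick the model that finds it least surprising. Everything here, from the function names to the crude add-one smoothing, is my own illustration:

```python
import math
from collections import Counter

def trigram_counts(tokens):
    """Count every three-word sequence in one author's training text."""
    return Counter(zip(tokens, tokens[1:], tokens[2:]))

def log_likelihood(sentence, counts):
    """Add-one smoothed log-probability of a sentence's trigrams
    under one author's counts (higher = more plausible)."""
    total = sum(counts.values())
    vocab = len(counts) + 1  # crude smoothing denominator
    score = 0.0
    for tri in zip(sentence, sentence[1:], sentence[2:]):
        score += math.log((counts[tri] + 1) / (total + vocab))
    return score

def most_likely_author(sentence, models):
    """Return the author whose model scores the sentence highest."""
    return max(models, key=lambda name: log_likelihood(sentence, models[name]))
```

The smoothing matters: without it, a single unseen trigram would assign zero probability to the whole sentence and rule that author out entirely.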

With the help of my collaborator Dan Garrette, from the computer science department at UT, I am also hoping to use this kind of modeling to better understand the process of translation.

For now, though, it’s mostly about generating fake text. Is there an author you want modeled? Send me a text file of their work, and I will post a fake story by your favorite writer.

[note: the image accompanying this post is from the manuscript of “Viejo hábito argentino,” part of the Borges collection at the Albert and Shirley Small Special Collections Library at UVA.]

About Hannah Alpert-Abrams
