Happy New 2023!

I spent the morning of January 1st working on this little animation. It uses all the things I learned this past week while following LoveByte’s demoscene tutorial. Here is some of my work from the tutorial; maybe you can spot where I took a trick or two?

2022 was not a great year. Not terrible, not great. Well, maybe a little bit bad. It felt like a year of missed opportunities and wasted work. By the end of the year I felt I could have done so much better. One big example is my naturalization process, which is still stalled due to extreme bouts of procrastination. I did try to improve myself, but without much success.

There were also nice beginnings. I joined the Fediverse in April, which was quite fun. I went to an international conference in December, which was awesome. I played some nice games. But when I try to remember what happened, I can only feel a sense of fog in my head.

So I guess my goal for this year is to battle this brain fog. To try and be more careful about the things that I do, to think about them, to reflect and write them down. And in the meanwhile, to try to figure out what is really important and focus on that.

And if all that fails, I can always fall back on writing more cool pico8 carts.

20, 15, 10, 5 year retrospective by Cory Doctorow

Cory Doctorow published an end-of-year retrospective on his blog, with links from 20, 15, 10, 5 and 1 year ago, all from between December 24th and January 7th.

Actually, he puts these backlinks in every post he writes (which is really cool), but the longer time span of the retrospective post makes it possible to get a pulse of these different years. Here is a small selection of the link titles that resonated with me. (Go to his blog if you want the actual links.)

20 years ago: 2003

The year I first arrived in Japan, when the internet was waking up. The music industry still thought it could win the fight against piracy, and people were writing new blogs everywhere.

  • #20yrsago The music industry STILL owes you $20!
  • #20yrsago New metaverse game launches
  • #20yrsago William Gibson has a blog

15 years ago: 2008

I had just finished my master’s degree and was playing Mafia online like there was no tomorrow. Other than that, my mind is quite fuzzy. Cory’s links are also all over the place (still music industry stuff, though).

  • #15yrsago From Nazi collaborator to Fortune 500 – companies that got rich on the Reich
  • #15yrsago History of guerrilla knitting at 24th Chaos Communication Congress
  • #15yrsago Motley Fool: litigation isn’t a business model, sell RIAA short
  • #15yrsago Resigning from Napster takes more than 30 minutes
  • #15yrsago One-armed man arrested in Belarus for clapping

10 years ago: 2013

I had been in Tsukuba for 2 years, and everything was smooth sailing. The calm before the storm, since 2014 would bring Gamergate as a prelude to the alt-right flooding over the internet.

  • #10yrsago Dad hires in-game hitsquad to kill his son’s characters
  • #10yrsago Telcos lobby North Carolina to make community Internet illegal, then abandon the state to second-worst Internet in the country
  • #10yrsago Anatomy of a patent troll who wants $1000 from every scanner user in America: patents are totally, utterly broken
  • #10yrsago Gary Gygax explains why Christians shouldn’t celebrate Christmas

5 years ago: 2018

Smack dab in the middle of the Bolsonaro, Brexit and Trump years, when everything was going to hell, everywhere. The links tell that story well.

  • #5yrsago Jeff Sessions encourages courts to continue practice of fining poor people for being too poor to pay their fines
  • #5yrsago American inequality is unequally distributed, and got lumpier after the Great Recession
  • #5yrsago 2018: the year that America’s ISPs hiked their prices
  • #5yrsago For 20+ years, Japan’s largest companies have been riddled with corruption and fraud
  • #5yrsago Hoaxer with a history of fake bomb threats SWATs and murders a random bystander over a $1.50 Call of Duty bet
  • #5yrsago Scotland is still a financial secrecy exporter, laundering billions for Russia’s crime-bosses and oligarchs

All in all

This was an interesting way to reminisce about the past 20 years. I should probably write my own end-of-year retrospective (at least one on my work website…)

Making a RSS feed by hand

RSS feeds are cool. They allow people to follow webpages, they don’t get stuck in spam filters, and they give people power over the web. I wanted to create an RSS feed for the Alife newsletter. The Alife newsletter is built by hand-made Python scripts that parse multiple markdown files, so I had to learn how to generate an RSS feed by hand from those scripts.

Part 1: The RSS format

A well-formed RSS file looks like this (the items live inside a channel element, and each one is wrapped in an item tag):

<?xml version="1.0" encoding="utf-8"?>
<rss version="2.0">
  <channel>
    <title>Title of the feed</title>
    <link>Link to your webpage</link>
    <description>Description of your feed</description>
    <item>
      <title>Title of one news item</title>
      <link>Link to the item</link>
      <guid>Unique id</guid>
      <pubDate>27 Nov 2013 15:17:32 GMT</pubDate>
      <description>The information to be displayed by the reader. Could be anything.</description>
    </item>
  </channel>
</rss>

Here is the detailed RSS format specification.

Part 2: Generating the items

An item requires a timestamp, but we didn’t have anywhere to store one, so I added a release_date.dat file to each directory, with the release date in yyyy-mm-dd format.

It also requires a time in GMT. Usually it is a little bit of a nightmare to check the local date-time of a server you don’t control (in this case, whatever server github uses for its continuous integration). Luckily, the newsletter is released once a month, so time is immaterial. I just put 00:00:00 GMT there.

Note that the description is empty; we will fix that later.

import os
from datetime import date

def getDate(edition):
    dateFile = os.path.join(edition, "release_date.dat")
    rdate = ""
    if os.path.exists(dateFile):
        with open(dateFile, "r") as f:
            dateText = f.readline()
        rdate = date(*[int(s) for s in dateText.split("-")])
    return rdate

def makeTitle(edition):
    nmb = str(int(edition[8:]))
    # Choose the ordinal suffix (11, 12 and 13 always take "th")
    if nmb[-1] == "1" and nmb[-2:] != "11":
        nmb += "st"
    elif nmb[-1] == "2" and nmb[-2:] != "12":
        nmb += "nd"
    elif nmb[-1] == "3" and nmb[-2:] != "13":
        nmb += "rd"
    else:
        nmb += "th"
    return f"The {nmb} edition of the Alife Newsletter"

def makeItem(edition):
    nDate = getDate(edition)
    titleDate = nDate.strftime("%B %Y")
    pubDate = nDate.strftime("%d %b %Y 00:00:00 GMT")
    link = "https://alife-newsletter.github.io/Newsletter/" + edition + ".html"

    item = []
    item.append("  <item>")
    item.append("    <title>" + makeTitle(edition) + ", " + titleDate + "</title>")
    item.append("    <link>" + link + "</link>")
    item.append("    <guid>" + edition + "</guid>")
    item.append("    <pubDate>" + pubDate + "</pubDate>")
    item.append("    <description>")
    item.append("    </description>")
    item.append("  </item>")
    return "\n".join(item)
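Ordinal suffixes are easy to get subtly wrong: 11, 12 and 13 all take “th” even though they end in 1, 2 and 3. As a sketch, here is a self-contained helper (the function name is my own) that handles those exceptions:

```python
def ordinal(n):
    """Return n as an ordinal string: 1 -> '1st', 2 -> '2nd', 11 -> '11th'."""
    # 11, 12 and 13 are exceptions: they always take "th"
    if 10 <= n % 100 <= 13:
        suffix = "th"
    else:
        suffix = {1: "st", 2: "nd", 3: "rd"}.get(n % 10, "th")
    return f"{n}{suffix}"
```

For example, ordinal(21) gives "21st" while ordinal(11) gives "11th".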

Part 3: Putting the items together

Now that we can generate the items, a simple script puts them all together and adds the header and footer of the RSS.

rss_header = """<?xml version="1.0" encoding="utf-8"?>
<rss version="2.0">
<channel>
<title>Artificial Life Newsletter</title>
<link>https://alife-newsletter.github.io/Newsletter/</link>
<description>The Artificial Life Newsletter brings you the latest alife news!</description>
"""
rss_footer = """</channel>
</rss>
"""
with open(os.path.join("docs", "RSS.xml"), "w") as f:
    f.write(rss_header)
    for e in edition_names:
        f.write(makeItem(e) + "\n")
    f.write(rss_footer)
Be careful! The xml file format is very particular about empty lines. In my first version of this script, there was a blank line at the beginning of the file, and my RSS reader refused to recognize the file as valid…
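One cheap way to catch this kind of problem is to parse the generated file before publishing it. A minimal sketch using Python’s standard library (the function name is my own); a blank line before the <?xml ...?> declaration will make the parser reject the file:

```python
import xml.etree.ElementTree as ET

def feed_is_wellformed(path):
    """Return True if the file at `path` parses as XML, False otherwise."""
    try:
        ET.parse(path)
        return True
    except ET.ParseError:
        return False
```

Running this on the generated docs/RSS.xml before committing would have caught the stray blank line immediately.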

Part 4: Adding the content from the newsletter

The above RSS is already workable. An RSS reader will be able to see when new articles are posted, and take you to the webpage of the article.

However, one of the great benefits of an RSS reader is the ability to read the article right there, without going to the original webpage! So we will add the content of the newsletter to the feed.

This is a simple matter of stripping the HTML headers and footers from each newsletter edition. The catch is that to include HTML in the RSS, you need to put it inside a <![CDATA[ ]]> section, like this:

def getContents(edition):
    htmlFile = os.path.join("docs", edition + ".html")
    appender = False
    contents = []
    with open(htmlFile, "r") as f:
        for l in f.readlines():
            if "<h1 " in l:
                appender = True
            if "</div>" in l:
                appender = False
            if appender:
                contents.append(l)
    return "\n".join(contents)

    item = []
    item.append("  <item>")
    item.append("    <title>" + makeTitle(edition) + ", " + titleDate + "</title>")
    item.append("    <link>" + link + "</link>")
    item.append("    <guid>" + edition + "</guid>")
    item.append("    <pubDate>" + pubDate + "</pubDate>")
    item.append("    <description><![CDATA[")  # Open the CDATA section
    item.append(getContents(edition))          # Add the contents
    item.append("]]></description>")           # Close the CDATA section
    item.append("  </item>")
    return "\n".join(item)

Part 5: Making the RSS discoverable

After writing the script that generates the RSS feed, I needed to let people find it on the webpage. This means creating links inside the webpage itself, as well as adding the following tag to the head of the page:

<link rel="alternate" type="application/rss+xml" title="Alife Newsletter RSS Feed" href="https://alife-newsletter.github.io/Newsletter/RSS.xml" />

This will make a little RSS icon appear in the browser, indicating to visitors that the website has an RSS feed.

Part 6: The End

And that’s it! Soon (as soon as I get the PR approved) you will be able to follow the Alife Newsletter without an e-mail reader or a Twitter account, from the comfort of your home!

Amazing command of the day: shuf

Today I learned an amazing bash command: shuf. It shuffles the lines of an input file and prints them to standard output. This is super useful when you want to sample a few lines from a very large file to get an idea of its contents.
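A quick sketch of how I use it (the file name is just illustrative):

```shell
# Build a large test file: one number per line
seq 1 1000000 > big_file.txt

# Print 5 random lines to get a feel for the contents
shuf -n 5 big_file.txt

# Without -n, shuf outputs every line in random order
shuf big_file.txt | head -n 5
```

The -n flag is the key part: it samples without reading the whole shuffled output, so it stays fast even on huge files.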

I learned about this command while I was playing with Google’s “Covid Mobility Dataset”. This is a dataset of day-to-day relative changes in visits to 5 different location categories (essential shopping, shopping, parks, transportation, workplaces), as measured in Google’s “Location History” dataset, for many cities across the world.

Originally someone suggested this data to me for a research project I’m working on, which involves training agents to reproduce human mobility patterns. After studying the data, I came to the conclusion that it will probably not be very useful for this particular project. However, it is still a fascinating dataset, and I’m sure a lot of cool things could be learned from it if someone took the time to look through it.

90’s Internet Nostalgia

Recently there has been a small subculture devoted to reviving the ’90s style of the web, especially in places such as “neocities” (a GeoCities revival project). Today I learned about the glorious CASTLE CYBERSKULL, which takes this idea and runs with it:

The above picture does not do justice to the GIF-crazy glory of CASTLE CYBERSKULL. Do take 5 minutes to visit this gem.

To be honest, back in the ’90s I kind of hated this aesthetic. My own website back then was an ugly affair with a solid mint background and black letters, and that was about it. But now the whole thing is nostalgic and reminds me of simpler times.

By the way, the links section of CASTLE CYBERSKULL has a fantastic link to a page that loads random YouTube videos of ’90s hits. You can tell I spent a good part of my evening surfing the nostalgia there.
