Creating an Anki flashcard deck from Twitter feed to learn sign language

TL;DR: sign language signs from a Twitter feed, straight into Anki:


The problem

Right, so I’m trying to learn some sign language. For anyone who’s not read the blog before: my hearing is really bad. So now I’m trying to get ahead of it getting worse, just in case. I’m going to do a course at the local uni next year but before that I figured it’d be nice to memorise some vocab. The great thing about being able to program is that you can improve your life in lots of little ways with some imagination and some help from Google.

Memorisation (Anki)

I’ve mentioned before that a really useful skill for devs is the ability to memorise. There are loads of techniques but something like a new language (spoken or sign) is a perfect use case for spaced repetition. I use Anki for my spaced repetition needs so I need to get some signs in there in a new deck so that I can start learning.

If you’re not familiar with spaced repetition, it’s a technique based on research showing that the best time to review something you’re trying to learn is just before you forget it. Each time you successfully recall it from memory, you’ll be able to go a lot longer before needing the next reminder.
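As a toy illustration of how the gaps grow (this is not Anki’s actual scheduling algorithm, which is based on SM-2, and the starting interval and ease factor below are made-up numbers):

```python
# Toy spaced-repetition schedule: each successful recall multiplies
# the gap before the next review by an "ease" factor, so reviews
# get further and further apart.
def review_intervals(first_interval_days=1, ease=2.5, reviews=5):
    intervals = []
    interval = first_interval_days
    for _ in range(reviews):
        intervals.append(round(interval, 1))
        interval *= ease
    return intervals

print(review_intervals())  # → [1, 2.5, 6.2, 15.6, 39.1]
```

After five successful recalls the next review is over a month away, which is why the technique scales to thousands of cards.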

Anki flashcards are similar to physical flashcards in that there is a front and a back. On one side you’ll put a prompt, e.g. a word to learn, and on the other you’ll put the “answer”, e.g. the corresponding sign for that word.
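In CSV terms, each card is just a row: front field, then back field. Here’s a sketch of what a couple of rows of the eventual file look like (the filenames are made up for illustration; the image on the back is referenced with an HTML img tag, which Anki resolves against its media folder on import):

```python
import csv
import io

# Build two example card rows in memory: front = English word,
# back = an img tag pointing at the sign image.
buf = io.StringIO()
writer = csv.writer(buf)
writer.writerow(['hello', '<img src="bsl-hello.jpg" />'])
writer.writerow(['thank you', '<img src="bsl-thank-you.jpg" />'])
print(buf.getvalue())
```

The csv module takes care of quoting the HTML for us, so we don’t have to escape anything by hand.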

The solution

So the steps to get what I need are fairly simple:

  1. Get British Sign Language signs from somewhere
  2. Iterate over the signs
  3. Get the word and store it
  4. Get the sign and store it with the word
  5. The storage format should be importable by Anki

A quick bit of research and we’ve got a wealth of signs from a great Twitter feed by British Sign! I also know from reading somewhere before that Anki can import CSVs, and you can even reference images if you add them to your Anki media folder, so that’s the unknowns from steps 1 and 5 sorted. Like any programming task, once the steps are clear it’s fairly trivial to implement.


Python (3) is the logical choice here. It’s quick and easy and I don’t need anything sexy. Just need it to do its job. So here’s the code with a few comments:

import twitter
import re
import csv
import urllib.request

# The Twitter API is awesome. First you have to get your
# API keys (removed my actual ones!) and create an instance of
# the client
api = twitter.Api(consumer_key='MY_CONSUMER_KEY',
                  consumer_secret='MY_CONSUMER_SECRET',
                  access_token_key='MY_ACCESS_TOKEN_KEY',
                  access_token_secret='MY_ACCESS_TOKEN_SECRET')

# Searching the last 1k tweets will be plenty
t = api.GetUserTimeline(screen_name="BritishSignBSL", count=1000)
tweets = [i.AsDict() for i in t]
with open('bsl_signs.csv', 'w', newline='') as csvfile:
    signwriter = csv.writer(csvfile, delimiter=',')
    for t in tweets:
        tweet_text = t['text']

        # The sign of the day tweets always have the same format so Regex
        # is handy for getting the word
        m ='sign is: (.+?) - http', tweet_text)
        if m:
            english_word =
            if 'media' in t:
                media = t['media']
                image_url = media[0]['media_url']
                filename = 'bsl-' + english_word.replace(' ', '-').lower() + '.jpg'

                # Downloading the image. Once copied to Anki's media
                # folder you just need to ref them in the CSV with img src
                urllib.request.urlretrieve(image_url, filename)

                # The important line. Write a row to the CSV, first column is
                # the word pulled from the tweet and second is the ref to the image
                signwriter.writerow([english_word, f'<img src="{filename}" />'])

Copy the downloaded images into Anki’s media folder, import the CSV, and you’re done. I can run the script again in a few weeks or months to pick up newly released signs, but there’s plenty to learn in the meantime!
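The copy step can be scripted too. A small sketch, assuming the Linux default media path and the default profile name “User 1” (on macOS the Anki2 folder lives under ~/Library/Application Support instead):

```python
import glob
import os
import shutil

# Anki keeps media per profile in a folder named collection.media.
# This path is the Linux default for a profile called "User 1".
ANKI_MEDIA = os.path.expanduser('~/.local/share/Anki2/User 1/collection.media')

def copy_images_to_anki(media_dir, pattern='bsl-*.jpg'):
    # Copy every downloaded sign image into Anki's media folder.
    for image in glob.glob(pattern):
        shutil.copy(image, media_dir)
```

After copying, import bsl_signs.csv via File > Import in Anki and map column 1 to the front field and column 2 to the back.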

Thanks to British Sign for kindly agreeing to me using their content.

Deaf in the software industry

Deaf insights

Not a technical post, but hopefully still an interesting one. I want to take you through a quick tour of some things I’ve experienced (good and bad) as a software engineer, specifically from the perspective of someone who has drastically lost their ability to hear over the last few years. NOTE: I’ll use the audiological term deaf in this post but it’s worth noting that there are differences between deaf, Deaf and Hard Of Hearing.

My hearing

A lot of this post is going to apply to others but some bits might be more significant, depending on the person’s hearing levels. For reference, my hearing (or lack of!) consists of high frequency deafness, which I’ve had as long as I can remember, as well as severe loss in the other frequencies caused by Ménière’s disease. I lost around 90% of hearing in my right ear about 5 years ago and had a similar loss in my left over the past year. Since then I’ve managed to gain a fraction back in both ears but I still require hearing aids on both sides, with supplemental lip reading, to maintain a conversation.


Anyone familiar with “Agile” development will be aware of standups, where everyone in a team gathers to quickly chat about what they’re working on and any obstacles they might be facing. Open communication is key in the Agile world. Unfortunately when you’re deaf, hearing 5-10 other people in a meeting room, which might have funny acoustics and/or AC on, can be tricky. I’m lucky that I have an understanding team who will take care to make me aware of what they’re talking about but sometimes even that isn’t enough.

For any deaf SEs out there, I recommend firstly making your team very aware of what level your hearing is at. Secondly, just like with normal group conversations, make sure to find a position in the room that facilitates hearing what you need to hear – this might require standing where you can see the most faces or perhaps beside the quieter speakers. If I’m having a particularly low hearing day and I can’t hear someone who’s talking to me directly then I’ll just walk over to them and ask them to repeat what they said.

For the hearing SEs reading this, if there is someone in your team who is deaf, be sure to make yourself seen as well as heard. Speak clearly and loudly. Move closer to your other team members if necessary.

Call me maybe

Standups aren’t the only meetings we have to deal with. Like it or not, they’re a fact of software life. If the company you work for is global, then you’ll also have the dreaded conference calls. Luckily, compared to a lot of industries, we have tools that make it way easier for deaf people to participate. GoToMeeting or other video conferencing software means there can be a visual element to calls. Slack or similar chat programs are ubiquitous in most development shops. For me, I couldn’t live without both of these. I occasionally have to attend calls that are audio only, but they really zap my energy because I have to spend a lot of effort just trying to extract the meaning out of what’s being said. With a video conference, where there are slides or a demo, I don’t have to focus quite as hard on the audio, as the visual cues will help cover what I miss. Chat software like Slack solves the problem in a slightly different way – it allows me to gather some pre-meeting info from one or more participants, so again I can expend energy more efficiently during the meeting itself.

Quick side note here for any recruiters out there. Try to be conscious of candidates who prefer written or face-to-face communication over phone calls. I love my job and don’t foresee any change in the near future, but if I were invited for a phone screen, or told that I could only find information on a job via phone – both of which were fairly common when I was on the job hunt – then I’d potentially be put off, because I know I might miss out on important information.

Syntactic sugar

The company I work for has a great cutting edge tech stack and is always willing to use a new tool if it will solve a problem better than the existing offerings. This is awesome! I can learn new stuff and it helps me do my job better. However, being deaf, new terms or methods can cause a bit of a headache. If I’m chatting to my coworkers and someone suggests some cool AWS gizmo to store our data more robustly, I’ll make a mental note to do a quick review of any domain specific terms once the conversation is finished. This helps me as an engineer because I can do some investigating into whether I think it’s a worthwhile move but it helps me more as a deaf person. How? Next time that tech, or the terms that surround it, come up in conversation I won’t have to do as much mental gymnastics to decipher what’s being said and I can just concentrate on the content of the conversation.

Deaf not dumb

Related to the last point, I’ll often make people “explain like I’m 5” when I want to be absolutely clear of the point they’re making. I’m pretty methodical at times so my colleagues need to be quite patient but in the end everyone benefits. I’ve been developing long enough to know that one of the most lethal blows to a software project comes in the form of assumptions. In my opinion, clarity is paramount.

Music to my ears

Being deaf definitely isn’t bad; it has some surprising upsides. One great thing is that I don’t need to stick headphones on to get in the zone. The hustle and bustle of a normal workplace can easily be ignored when it barely registers above silence. You could argue that this means I might look more approachable and therefore might get more interruptions but I find I’m more productive if I take occasional breaks from my screen anyway.

1 on 1

Sometimes the trickiest encounters are the simplest for most people. Ever pair program? It’s a great process for working through a non-trivial task. Unfortunately, it can be a bit more work when deaf. If someone else is driving then they’re probably going to be looking at the screen, meaning their voice isn’t carrying towards me and I can’t see their mouth properly to get a bit of lip reading in. If I’m driving then I’m probably looking at the screen, meaning I have to keep turning back and forth to hear what my co-pilot is saying. I still encourage it in my team and try to pair program when it suits, because I feel the pros outweigh the cons overall.

Programmers have this reputation of being socially awkward. I’ve found the opposite to be true in the majority of cases, though maybe I’ve been lucky. With that said, I do find a lot of people in the industry quite soft spoken. If you’re one of those people, please make an extra effort when speaking to someone like me in future. Also, try to recognise when someone has an issue communicating, in whatever way. Everyone in my work knows I’m the deaf guy, so they’ll really go the extra mile to communicate, but occasionally someone from a different office will travel over and there can be an initial fumbling through a few conversations until it sinks in that I’m not kidding when I say I’m deaf.


I’m convinced that lack of hearing is not a disadvantage but it obviously does provide unique challenges in life. Hopefully this post has shed a little light on the subject. Reading this back I’m wary of painting a negative picture. This industry is really great for deaf people, especially if you’re surrounded by the right people in a supportive company.