About

I really enjoy This American Life (duh), and many years ago I got excited about creating my own little archive in rebellion against the iTunes podcast subscription service at the time. I was a budding web developer and hungry for personal projects that were interesting and helped me learn something new along the way. When I found out that the current This American Life episode could be downloaded from their website for free, I knew it was time to write some code and automate this shit. Ruby to the rescue!

My first attempt at all this was a simple little Ruby on Rails application, with an /import endpoint that would check the .org to see if new episodes were available. If so, it would use Nokogiri to scrape out the necessary metadata (number, title, description, date, image URL, MP3 URL) and save it all to a database, then use the image and MP3 URLs to save copies of those files in an S3 bucket. I snagged this sick domain, and I was GOOD TO GO!
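For the curious, here's roughly the shape of the record that import step produced — a minimal Ruby sketch with the actual Nokogiri scraping and S3 upload left out. The helper name is my own, and the asset URL pattern is borrowed from the JSON dump further down this page:

```ruby
require "json"

# Hypothetical sketch: build one episode record from scraped metadata.
# The Nokogiri scrape and the S3 upload steps are stubbed out here.
def build_episode(number:, date:, title:, description:)
  padded = format("%04d", number) # 1 -> "0001", matches the asset filenames
  {
    "number"      => number,
    "date"        => date,
    "title"       => title,
    "description" => description,
    "image_url"   => "https://assets.thisamericanlife.co/images/#{padded}.jpg",
    "audio_url"   => "https://assets.thisamericanlife.co/audios/#{padded}.mp3",
    "url"         => "https://thisamericanlife.co/episodes/#{padded}"
  }
end

puts JSON.pretty_generate(build_episode(
  number: 1,
  date: "1995-11-17",
  title: "New Beginnings",
  description: "Our program's very first broadcast."
))
```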

The problem is that website scraping is notoriously unreliable: design elements change, paths break, and I was having a weird issue saving the MP3 files. It was clunky, to say the least. I got sick of dealing with the little bugs I couldn't figure out, and the project went to the back burner. Time went by. I let the domain expire.

No big deal. Moving on.

These days I've been having lots of fun with Jekyll and GitHub Pages. I started using it for my band about a decade ago, then it was the obvious choice for the roller disco in town, and it's been super fun translating some of my small database-driven Ruby on Rails projects over to the wonderful world of files on disk and front matter! Like, this frickin' thing has pagination, a pretty good little search bar, and it's all achieved with basic, mostly built-in functionality.
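Pagination, for instance, is just a couple of lines of configuration — here's a sketch of the relevant `_config.yml` bits, assuming the classic jekyll-paginate plugin (the values are illustrative, not necessarily what this site uses):

```yaml
# _config.yml — illustrative values; assumes the jekyll-paginate plugin
plugins:
  - jekyll-paginate

paginate: 10                 # episodes per page
paginate_path: "/page:num/"  # generates /page2/, /page3/, ...
```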

Very cool. BIG FAN.

Admittedly, I do miss out on some of the dynamic and/or automated aspects of a db-driven site. But with the help of a new and improved Ruby import script, super handy GitHub Actions cron job scheduling, and my new favorite cloud storage provider, it feels like much less work in the end. And more fun! I get to use Ruby in short bursts, from a couple of different angles, but the end product is still just static files served up by a simple web server. These days I'm asking myself: does this project really need an always-on database process running?!
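The scheduling piece is just a plain GitHub Actions workflow. A hypothetical sketch — the script path, cron schedule, and commit step are all my own assumptions, not the actual setup:

```yaml
# .github/workflows/import.yml — hypothetical sketch
name: import
on:
  schedule:
    - cron: "0 12 * * 2"   # e.g. every Tuesday at noon UTC
  workflow_dispatch:        # allow manual runs too
jobs:
  import:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: ruby/setup-ruby@v1
        with:
          ruby-version: "3.3"
      - run: ruby scripts/import.rb   # assumed script location
      - run: |
          git config user.name "github-actions"
          git config user.email "actions@github.com"
          git add -A
          git diff --cached --quiet || git commit -m "Import new episode"
          git push
```

Commit the new front-matter files back to the repo and GitHub Pages rebuilds the site — no server involved.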

Also, it's free. ;)

There's an API?!

I use the label "API" very loosely here. More accurately, this section should probably be called "Endpoint," because that's all you get: one dumb endpoint.

[{
  "number": 0,
  "why": "so that array[1] will return episode 1, array[2] returns episode 2, etc"
},{
  "number": 1,
  "date": "1995-11-17",
  "title": "New Beginnings",
  "description": "Our program's very first broadcast.",
  "image_url": "https://assets.thisamericanlife.co/images/0001.jpg",
  "audio_url": "https://assets.thisamericanlife.co/audios/0001.mp3",
  "url": "https://thisamericanlife.co/episodes/0001"
},{
  "number": 2,
  "date": "1995-11-24",
  "title": "Small Scale Sin",
  "description": "Small-scale stories on the nature of small-scale sin.",
  "image_url": "https://assets.thisamericanlife.co/images/0002.jpg",
  "audio_url": "https://assets.thisamericanlife.co/audios/0002.mp3",
  "url": "https://thisamericanlife.co/episodes/0002"
},{
  "number": 3,
  "date": "1995-12-01",
  "title": "Poultry Slam 1995",
  "description": "Stories decrying the wonders of turkeys, chickens, and other fowl.",
  "image_url": "https://assets.thisamericanlife.co/images/0003.jpg",
  "audio_url": "https://assets.thisamericanlife.co/audios/0003.mp3",
  "url": "https://thisamericanlife.co/episodes/0003"
},...]
  

It's literally just every episode in one big dumb dump, but it is valid JSON and can be parsed as such. Eventually I want to add a couple of URL params for search, pagination, and probably single episodes. We'll see. Turns out, I think this shit is fun. What a world!
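Consuming it from Ruby is about as simple as it gets. A sketch — the endpoint URL below is a hypothetical placeholder, so swap in whatever the site actually serves:

```ruby
require "json"
require "net/http"

# Fetch the whole dump. The URL is a hypothetical placeholder —
# substitute the real endpoint.
def fetch_episodes(url)
  JSON.parse(Net::HTTP.get(URI(url)))
end

# Because the placeholder episode 0 pads the front of the array,
# episodes[n] IS episode n.
def episode(episodes, n)
  episodes[n]
end

# Usage (hypothetical URL):
# episodes = fetch_episodes("https://thisamericanlife.co/episodes.json")
# episode(episodes, 1)["title"]
```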

Hope you enjoy,

Eli 🤓