Shortly after the previous Fed update post, support for running Fed with Docker was added, so getting started and running Fed yourself is now a lot less complicated.

Running Fed with Docker and Docker Compose.
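A minimal `docker-compose.yml` for this kind of setup might look something like the sketch below. The service names, image names, and environment variables here are illustrative assumptions, not Fed's actual configuration.

```yaml
services:
  fed:
    # Hypothetical image name; substitute the actual Fed image.
    image: fed
    depends_on:
      - database
    environment:
      # Connection string pointing at the database service below.
      DATABASE_URL: postgres://fed:password@database/fed

  database:
    image: postgres:14
    environment:
      POSTGRES_USER: fed
      POSTGRES_PASSWORD: password
```

Note that `depends_on` only controls start order, not readiness: PostgreSQL may still be initializing when the application starts, which is exactly the situation the connection retries described later in this post deal with.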

New Pages

Downloader Controls

After going through and adding communication between threads in the last update post, there is now also a downloader page with the start downloading button I mentioned. Not long after adding that, I also added shared state between the threads, so we can tell when the downloader is and isn't running.

A lot more is possible with the communication and state all set up, but for now it's just these 2 features.
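As a sketch of how this kind of setup can work in Rust (the names here are illustrative, not Fed's actual code): an `mpsc` channel carries instructions to the downloader thread, and an `AtomicBool` shared through an `Arc` exposes whether the downloader is currently running.

```rust
use std::sync::atomic::{AtomicBool, Ordering};
use std::sync::{mpsc, Arc};
use std::thread;

/// Illustrative instruction type; Fed's real enum has more variants.
enum Instruction {
    Download,
}

/// Spawn a downloader thread, returning the instruction channel and a
/// shared flag that tells us whether the downloader is currently running.
fn spawn_downloader() -> (
    mpsc::Sender<Instruction>,
    Arc<AtomicBool>,
    thread::JoinHandle<()>,
) {
    let (sender, receiver) = mpsc::channel();
    let running = Arc::new(AtomicBool::new(false));
    let flag = Arc::clone(&running);

    let handle = thread::spawn(move || {
        while let Ok(Instruction::Download) = receiver.recv() {
            flag.store(true, Ordering::SeqCst);
            // ... download feeds here ...
            flag.store(false, Ordering::SeqCst);
        }
    });

    (sender, running, handle)
}

fn main() {
    let (sender, running, handle) = spawn_downloader();
    // The "start downloading" button would send an instruction like this:
    sender.send(Instruction::Download).unwrap();
    drop(sender); // Closing the channel lets the thread exit.
    handle.join().unwrap();
    assert!(!running.load(Ordering::SeqCst));
}
```

Any thread holding a clone of the `Arc` can check the flag without locking, which is all the "is it running?" indicator needs.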

The downloader page.


About Page

This one probably doesn't need much explanation: an about page for all kinds of information.

The about page.

User Experience

Styling for invalid inputs was added.

An input that contains an invalid URL.

Some logic was added so that if the downloader is already running and you click the force download button, it will now restart the downloader with force download enabled. Previously the instruction would just be ignored.
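That logic boils down to something like the sketch below; the function and instruction names are hypothetical, used only to illustrate the "stop, then restart with force enabled" behavior.

```rust
/// Decide what to send to the downloader when the force download button
/// is clicked. Illustrative only; not Fed's actual code.
fn handle_force_download(downloader_running: bool) -> Vec<&'static str> {
    let mut instructions = Vec::new();
    if downloader_running {
        // Previously a running downloader simply ignored the click; now
        // it is stopped first so it can be restarted with force enabled.
        instructions.push("stop");
    }
    instructions.push("force-download");
    instructions
}

fn main() {
    assert_eq!(handle_force_download(true), vec!["stop", "force-download"]);
    assert_eq!(handle_force_download(false), vec!["force-download"]);
}
```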

A new downloader instruction was added, namely DownloadUndownloaded, bringing the total to 3:

pub enum DownloaderInstruction {
    // (The other two variants aren't shown in this excerpt; one of them
    // is the force download instruction mentioned above.)
    DownloadUndownloaded,
}

This new instruction is sent when a new feed is added on the home page, or when an OPML file is imported and it contains new feeds. As the name implies, it instructs the downloader to only download feeds that have not been downloaded previously.

When running Fed with Docker for the first time, it could happen that PostgreSQL wasn't done initializing yet, causing Fed to crash because it couldn't establish a database connection. Now, instead of crashing on the first failed attempt, Fed will try to connect up to 10 times, waiting 2 seconds between attempts.
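A generic retry loop along these lines is enough for that; this is a sketch of the described behavior, not Fed's actual connection code.

```rust
use std::thread;
use std::time::Duration;

/// Call `connect` up to `attempts` times, sleeping `delay` between
/// failed tries, and return the first success or the last error.
fn connect_with_retry<T, E>(
    attempts: u32,
    delay: Duration,
    mut connect: impl FnMut() -> Result<T, E>,
) -> Result<T, E> {
    let mut last_error = None;
    for attempt in 1..=attempts {
        match connect() {
            Ok(connection) => return Ok(connection),
            Err(error) => {
                last_error = Some(error);
                if attempt < attempts {
                    thread::sleep(delay);
                }
            }
        }
    }
    Err(last_error.expect("attempts must be at least 1"))
}

fn main() {
    // Demo: the "database" only becomes ready on the third attempt.
    // A real caller would use Duration::from_secs(2) as the delay.
    let mut tries = 0;
    let result = connect_with_retry(10, Duration::from_millis(1), || {
        tries += 1;
        if tries < 3 {
            Err("database not ready")
        } else {
            Ok("connected")
        }
    });
    assert_eq!(result, Ok("connected"));
}
```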

Some ~responsive design~ was added so Fed is a bit more usable on mobile.

Fed in a mobile setting.

Sometimes feeds have items with their content set in the description/summary instead of the body. To avoid displaying an item without any content, Fed will now fall back to the item's description if it finds the item's body is empty (and the description exists).
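The fallback itself is a small decision; sketched out (with hypothetical names, not Fed's actual code), it looks like this:

```rust
/// Pick the content to display for a feed item: the body if it has any
/// text, otherwise the description, if there is one.
fn display_content<'a>(
    body: Option<&'a str>,
    description: Option<&'a str>,
) -> Option<&'a str> {
    match body {
        Some(text) if !text.trim().is_empty() => Some(text),
        // Body is missing or empty: fall back to the description.
        _ => description,
    }
}

fn main() {
    assert_eq!(display_content(Some("body"), Some("desc")), Some("body"));
    assert_eq!(display_content(Some(""), Some("desc")), Some("desc"));
    assert_eq!(display_content(None, None), None);
}
```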

The Holllo GitLab activity feed, using descriptions for the content.

Saving Feed Data

Up until now, when you would go to view a feed, Fed would take the feed's raw data from the database and then parse it on demand. For just viewing feeds, this works fine. But because I have more complex features planned, such as various ways of filtering and sorting, we'll have to save the feed data directly.

This is quite a big task as it requires adding a bunch of new stuff to the database, as well as rewriting a large part of the feed viewing page logic. So to make the transition a bit simpler, I've slowly started saving feed data, and at the moment these are the things that are saved:

pub struct FeedData {
  /// The feed id this feed data belongs to (foreign key).
  pub feed_id: i32,
  /// The feed data id, auto-generated by PostgreSQL (primary key).
  pub id: i32,
  /// The unique ID generated by the feed (or feed_rs).
  pub unique_id: String,
  /// The type of feed (Atom, JSON, RSS...).
  pub feed_type: FeedType,
  /// The title of the feed.
  pub title: Option<String>,
  /// The time the feed was last modified in a significant way.
  pub updated: Option<i64>,
  /// The time the feed was last published.
  pub published: Option<i64>,
  /// The description of the feed.
  pub description: Option<String>,
}

Most of this isn't particularly interesting, but since we're saving this data anyway, I figured it would be nice to be able to see it somewhere too. So the edit and feed list pages were modified slightly to incorporate it.

Saved feed data from the Mozilla blogs.

There's a lot more data to add, specifically a bunch more metadata and most importantly the feed's items, so that's next. And once that's done, some filtering and sorting should be relatively simple to add.


If you're like me and have over 500 feeds saved, an error will occasionally occur with a few of them: maybe the URL is no longer correct, or something goes wrong during parsing; anything can happen. To deal with bad feeds like that, a new interface was added to the feed list that shows you why a feed is misbehaving.

Showing the error when a feed failed parsing.

To deal with these bad feeds, you are given 3 options:

There are still some things I'd like to add to this bad feeds list, such as an "Ignore All" button, since it could happen that your internet disconnects and all your feeds fail to download even though you know they should work. But for now this works great.

That's all for this update, thanks for reading!