Closed flamedfury closed 7 months ago

Hello, I love this project. I've been looking at creating an aggregated feed from multiple external sources, displaying the posts in a list and creating a subscribable feed.

I want to use this in an existing 11ty project. Can you help me understand the core parts for aggregating the multiple feeds?

Is this allowed as part of your project? No worries if this isn't allowed.
Hi @flamedfury!
Thanks for reaching out, I'll get back to you with a deeper explanation soon.
Thanks, looking forward to it 😁
Heya @lwojcik checking in to see if you had a chance to look at this 😊
Okay, let's do this. I apologize for keeping you waiting so long.
> Is this allowed as part of your project? No worries if this isn't allowed.
Starting from version 1.3.6 this project is licensed as public domain - you can do whatever you want with it, with or without credit, commercially or not, enjoy. 😄 Older versions are under the MIT license (i.e. you can also do whatever you want as long as you include the license, but let's be honest, I have better things to do in life than checking if everyone who pulls / forks this project obeys license requirements).
> Can you help me understand the core parts for aggregating the multiple feeds?
Yes.
The core parts of the project are as follows:

- the list of sites, from which the `sites` collection is created - see the `content/sites` directory; the sites are also described in the README,
- `content/_data/siteConfig.js`, which holds the aggregator configuration,
- the `.eleventy.js` file, which contains all necessary logic for the aggregator to run.

Let's have a look at `.eleventy.js`.
The site declares two Eleventy collections: `articles` and `sites`.

The `sites` collection fetches all items from `content/sites/` and sorts them alphabetically.

Apart from that, inside this collection declaration, all site avatars (favicons) are fetched and saved locally in the project folder (that way we don't hotlink to original files, and site owners don't yell at us for wasting their bandwidth). This collection is now consumable anywhere in the project, but to see it in action, you want to see `content/_includes/bloglist.njk` and `content/_includes/bloglist-item.njk`. This is what you see on the Sites subpage.
The `articles` collection is a little more complicated.
Step by step, it does the following things:

- it goes through all sites in the `sites` collection,
- for each site, it fetches the feed (the `feedData` variable) using the `@extractus/feed-extractor` library,
- the fetched feed is parsed and its entries extracted (the `parsedFeedData` and `feedContent` variables),
- all articles are combined into a single array and returned (the `return` statement). In the meantime, the array is sorted by publication date and sliced so a limited number of articles (configurable in `siteConfig.js`) is fetched from each feed.

Finally, the `articles` collection is consumed in the `content/index.njk` file. Additionally, post pagination is applied in the template frontmatter. Each post displayed on the site is formatted in a consistent way as specified in `_includes/partials/post.njk`.
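To make those steps concrete, here is a simplified sketch of what such a collection can look like (not the repository's exact code - the feed field name and the per-feed limit are assumptions; the real logic lives in `.eleventy.js`):

```js
// .eleventy.js - simplified sketch of the articles collection, not the exact code
module.exports = function (eleventyConfig) {
  eleventyConfig.addCollection("articles", async (collectionApi) => {
    // Dynamic import avoids any CommonJS/ESM mismatch with the package
    const { extract } = await import("@extractus/feed-extractor");

    // In the real project this limit is read from content/_data/siteConfig.js
    const MAX_POSTS_PER_FEED = 5;

    const sites = collectionApi.getFilteredByGlob("content/sites/*.md");
    const articles = [];

    for (const site of sites) {
      try {
        // Fetch and parse the site's feed (the feedData step);
        // the "feed" front matter field name is an assumption
        const feedData = await extract(site.data.feed);

        // Normalise the entries we care about (the parsedFeedData / feedContent step)
        const feedContent = (feedData.entries || [])
          .map((entry) => ({
            title: entry.title,
            url: entry.link,
            date: new Date(entry.published),
            source: site.data.title,
          }))
          .sort((a, b) => b.date - a.date) // newest first
          .slice(0, MAX_POSTS_PER_FEED);   // limit per feed

        articles.push(...feedContent);
      } catch (err) {
        console.warn(`Could not fetch feed for ${site.fileSlug}:`, err.message);
      }
    }

    // Combined list, newest first, consumed by content/index.njk
    return articles.sort((a, b) => b.date - a.date);
  });
};
```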
This is more or less the most important thing to understand about the aggregating logic. It's not the most readable or cleanest code (these days I'd probably try achieving the same effect with global data files for better readability) but it gets the job done.
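For the record, a global-data-file variant of the same idea might look roughly like this (again a hedged sketch, not code from this repository):

```js
// content/_data/articles.js - hypothetical global data file variant
module.exports = async function () {
  const { extract } = await import("@extractus/feed-extractor");

  // In a real setup these would come from your site list or a config file
  const feeds = [
    "https://example.com/feed.xml",
    "https://example.org/rss",
  ];

  const articles = [];
  for (const url of feeds) {
    const feedData = await extract(url);
    articles.push(
      ...(feedData.entries || []).map((entry) => ({
        title: entry.title,
        url: entry.link,
        date: new Date(entry.published),
      }))
    );
  }

  // Exposed to all templates as `articles`, newest first
  return articles.sort((a, b) => b.date - a.date);
};
```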
Please let me know if you have any further questions, I'll be happy to explain.
Fantastic, I can't wait to dive into this over the weekend! Thanks for putting the time in for me!