In the current implementation, blogs are fetched by the client like so:
const res = await fetch("https://www.bunnieabc.com/index.xml");
const xmlText = await res.text();
We will move fetching to the server. The current implementation extracts the link, title, description, and publication time.
Fetching from the client causes CORS errors and forces us to stitch together data from different sources: to retrieve the article's author and its hero image, we would need to use each link to fetch the full article and then append that data while tracking indexes.
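For reference, the current extraction step can be sketched as a small parser over the feed XML. This is a simplified, regex-based version (so it also runs outside the browser); the real code may use `DOMParser` instead, and the tag names assume a standard RSS 2.0 feed:

```javascript
// Minimal RSS 2.0 item extractor (sketch). Assumes well-formed <item>
// blocks with <title>, <link>, <pubDate>, and <description> children.
function parseFeed(xmlText) {
  const items = [];
  const itemBlocks = xmlText.match(/<item>[\s\S]*?<\/item>/g) || [];
  for (const block of itemBlocks) {
    // Grab the inner text of a child tag, or null if it is absent.
    const pick = (tag) => {
      const m = block.match(new RegExp(`<${tag}>([\\s\\S]*?)</${tag}>`));
      return m ? m[1].trim() : null;
    };
    items.push({
      title: pick("title"),
      link: pick("link"),
      pubDate: pick("pubDate"),
      description: pick("description"),
    });
  }
  return items;
}
```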
Logging the extracted fields:
{console.log("Article Data:", item.title, item.link, item.pubDate, item.description)}
Result
Article Data:
Anywhere on Earth in Under an Hour: SpaceX's Plan to Revolutionize Transportation
https://bunnieabc.com/post/anywhere-on-earth-in-under-an-hour-spacexs-plan-to-revolutionize-transportation/
Mon, 30 Oct 2023 11:05:07 +0300
SpaceX is developing a new rocket that could revolutionize transportation, making it possible to travel from anywhere on Earth in under an hour. Learn more about SpaceX's plan to make point-to-point travel a reality.
Article Data:
Privacy-preserving computing: The future of data security?
https://bunnieabc.com/post/privacy-preserving-computing-the-future-of-data-security/
Thu, 26 Oct 2023 15:41:36 +0300
How to protect your data from data breaches and big tech firms. This article explores privacy-preserving techniques such as homomorphic encryption, zero-knowledge proofs, data minimization, blockchain technology, and federated learning. It also discusses the future of data and the role of government oversight.
Article Data:
Web 2.0 vs Web 3.0: The Future of the Internet
https://bunnieabc.com/post/web-2-0-vs-web-3-0-the-future-of-the-internet/
Mon, 23 Oct 2023 19:36:28 +0300
Web 3.0 is the next generation of the internet, and it has the potential to revolutionize the way we use it. Web 3.0 apps are built on blockchain technology, which makes them more decentralized, secure, and user-controlled. In this blog post, we compare Web 2.0 and Web 3.0 and highlight some of the most exciting Web 3.0 apps that are currently in development.
This data is incomplete; we would need to make another request, this time using the individual article links. The optimal approach is to fetch all the links first, then retrieve the remaining data from each article, since the full article carries everything we need. At this point, the missing pieces are the hero image and the author.
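One way to recover those two fields is to fetch each article page and read its meta tags. The sketch below assumes the article HTML exposes an `og:image` tag and an author `<meta>` tag; that assumption needs to be verified against the actual pages:

```javascript
// Pull author and hero image out of an article's HTML (sketch).
// Assumes <meta name="author" content="..."> and
// <meta property="og:image" content="..."> exist, with the attribute
// appearing before content; returns null for any missing field.
function extractArticleMeta(html) {
  const meta = (attr, value) => {
    const re = new RegExp(
      `<meta[^>]*${attr}=["']${value}["'][^>]*content=["']([^"']*)["']`,
      "i"
    );
    const m = html.match(re);
    return m ? m[1] : null;
  };
  return {
    author: meta("name", "author"),
    image: meta("property", "og:image"),
  };
}
```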
Moving this logic into server components should clear all of these errors, since the requests would no longer originate from the browser.
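As a sketch of where this lands: on the server, the feed fetch and the per-article enrichment can run together, with no CORS involved. Everything below is an assumed shape for the refactor, not final code; the `og:image` lookup in particular assumes the article pages expose that meta tag, and `fetchImpl` is injectable only so the function can be exercised without the network:

```javascript
// Server-side aggregation sketch (assumed shape, not final code).
// Runs entirely on the server; fetchImpl defaults to the global fetch.
async function getBlogs(feedUrl, fetchImpl = fetch) {
  const xmlText = await (await fetchImpl(feedUrl)).text();
  // First pass: pull the basic fields out of each <item> block.
  const itemBlocks = xmlText.match(/<item>[\s\S]*?<\/item>/g) || [];
  const items = itemBlocks.map((block) => {
    const pick = (tag) => {
      const m = block.match(new RegExp(`<${tag}>([\\s\\S]*?)</${tag}>`));
      return m ? m[1].trim() : null;
    };
    return {
      title: pick("title"),
      link: pick("link"),
      pubDate: pick("pubDate"),
      description: pick("description"),
    };
  });
  // Second pass: fetch each article in parallel and pull the hero image
  // from its og:image meta tag (an assumption about the page markup).
  return Promise.all(
    items.map(async (item) => {
      const html = await (await fetchImpl(item.link)).text();
      const m = html.match(/property=["']og:image["'][^>]*content=["']([^"']*)["']/i);
      return { ...item, image: m ? m[1] : null };
    })
  );
}
```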