ddev / ddev.com

Astro source code used to generate the static, public ddev.com site.
https://ddev.com
Apache License 2.0

Update WordPress blog author descriptions #35

Closed: mattstein closed this issue 1 year ago

mattstein commented 1 year ago

Descriptions are pulled from WordPress and rendered as Markdown. They could use some minor refreshing and link-fixing:

mayankguptadotcom commented 1 year ago

@mattstein - I wasn't able to create a new issue, and this seemed like the closest one for discussing the source of the blog. I think the true purpose of the Astro site would be to remove WP from the equation. Using Markdown or MDX (possibly with Astro m2dx) to write blog posts would make managing and publishing content easy, and of course it would be version controlled.

Thoughts?

mattstein commented 1 year ago

I think the true purpose of the Astro site would be to remove WP from the equation.

I failed to articulate this in the project overview, but my goal has been to reduce the surface area of WordPress without getting rid of it so that we still have a Platform.sh project for dogfooding. I’ve been working in this direction, but happy to change if @rfay (or anyone else) thinks we’d be better off getting rid of the WordPress install.

For me, the true purpose of the Astro site is to decouple the front end so it’s easier for people to contribute and easier to host.

Markdown or MDX

MDX is trickier right now because you can’t easily get its rendered content from Astro the way you can with Markdown, so there’s no including it in RSS feeds or syndication. Maybe not a deal-killer, but I’ve been avoiding it where full-content RSS feeds are concerned.

rfay commented 1 year ago

My constraints are:

It seems like we're well on the way to these with the current project, and I'm happy with that. Using Platform.sh for dogfooding isn't a goal in itself, although I'm sure I can learn a thing or two that way, and it could stay there regardless.

It seems like we're doing great, thanks so much!

mattstein commented 1 year ago

I’m blocked on the WordPress end, and it seems we’d all secretly prefer @mayankguptadotcom’s suggestion of using Markdown and doing away with the WordPress install altogether. Sorry for the about-face on this, but I’ll migrate the existing blog posts to Markdown and update the Astro build accordingly. More work at the moment, but surely better in the long run and easier to maintain.

rfay commented 1 year ago

Sounds great.

mattstein commented 1 year ago

Done in f54e012c2dc911d68b04a66598829caec562fc70.

mayankguptadotcom commented 1 year ago

@mattstein - No worries at all; we're all trying to do what's best for DDEV, and a difference of opinion is always helpful. I have time today, so I can easily convert the existing blog posts into Markdown and commit them, if that helps reduce your workload.

mattstein commented 1 year ago

No need @mayankguptadotcom, but thank you! I migrated the posts yesterday and it went quicker than I thought.

rfay commented 1 year ago

Amazing. But I see you did piles of those that didn't need to be migrated. Old release announcements and old puffs for Ultimike's DDEV class...

mattstein commented 1 year ago

It was automated, and I'll always treat content with care and omit anything we agree shouldn't be there as a follow-up step. All I did was remove dead newsletter signup links.

I assume there's plenty of cruft that should have already been pruned, but also that it's not solely up to me to decide what's important.

rfay commented 1 year ago

Love your respect! Thanks. Glad it was automated!

mattstein commented 1 year ago

I have an issue for content cleanup over here, if anybody’s got strong feelings or guidance to offer.

rfay commented 1 year ago

Could you please make a note about what you did to automate this and what tools you used? I've had to do this from time to time (you'll note that some of these got migrated into the DDEV docs for example). Would love to have a good record of a good technique.

mattstein commented 1 year ago

@rfay Sure! I don’t know how useful it’ll be, because the content was coming from GraphQL and this was absolutely a quick-and-dirty thing I briefly committed (on purpose) in f54e012c2dc911d68b04a66598829caec562fc70 and ae0cf7b66d29b78bc7507cf31e74375d6194495f. That temporary stuff has since been removed.

In each case, where I was already making a GraphQL call to fetch data, I wedged in some hamfisted code that would write that data to a file.

I needed to save author information, for example, so I put it in a JSON blob. I started with this existing method:

export async function getAllBlogPostAuthors() {
  const data = await fetchAPI(`
    {
      users(where: {hasPublishedPosts: POST}) {
        nodes {
          name
          slug
          firstName
          description
          posts(where: {status: PUBLISH}) {
            nodes {
              id
            }
          }
          avatar {
            url
          }
        }
      }
    }
  `)

  return data?.users
}

...and then collected the details I wanted and temporarily wrote them to cache/authors.json before returning the data for the site:

// Assumes `import fs2 from 'node:fs'` and `import path from 'node:path'`
// at the top of the file, alongside the existing DEVELOPMENT_CACHE_DIR constant.
export async function getAllBlogPostAuthors() {
  const data = await fetchAPI(`
    {
      users(where: {hasPublishedPosts: POST}) {
        nodes {
          name
          slug
          firstName
          description
          posts(where: {status: PUBLISH}) {
            nodes {
              id
            }
          }
          avatar {
            url
          }
        }
      }
    }
  `)

  const nodes = data.users.nodes
  const dir = path.resolve('./' + DEVELOPMENT_CACHE_DIR)

  // Collect just the author details we want to keep.
  const authorData = nodes.map((node) => {
    return {
      name: node.name,
      firstName: node.firstName,
      description: node.description,
      slug: node.slug,
      avatarUrl: node.avatar.url,
    }
  })

  if (!fs2.existsSync(dir)) {
    fs2.mkdirSync(dir, { recursive: true })
  }

  const filePath = path.join(dir, 'authors.json')

  fs2.writeFileSync(filePath, JSON.stringify(authorData))

  return data?.users
}

The result now lives here.

The posts used the same hackery, but were a little more involved. I started with the method that grabs all the blog posts via GraphQL:

export async function getAllBlogPosts() {
  const data = await fetchAPI(`
    {
      posts(first: 1000) {
        edges {
          node {
            title
            slug
            featuredImage {
              node {
                sourceUrl
              }
            }
            author {
              node {
                name
                avatar {
                  url
                }
              }
            }
            date
            content
            categories {
              nodes {
                name
                slug
              }
            }
          }
        }
      }
    }
  `)

  return data?.posts
}

I added the node-html-markdown package knowing I’d need to convert HTML content into Markdown. Instead of writing a JSON blob this time, I wrote out individual [slug].md files including exactly the frontmatter we’ll need:

export async function getAllBlogPosts() {
  const data = await fetchAPI(`
    {
      posts(first: 1000) {
        edges {
          node {
            title
            slug
            featuredImage {
              node {
                sourceUrl
              }
            }
            author {
              node {
                name
                avatar {
                  url
                }
              }
            }
            date
            content
            categories {
              nodes {
                name
                slug
              }
            }
          }
        }
      }
    }
  `)

  const edges = data.posts.edges
  const dir = path.resolve('./' + DEVELOPMENT_CACHE_DIR + '/posts')

  if (!fs2.existsSync(dir)) {
    fs2.mkdirSync(dir, { recursive: true })
  }

  edges.forEach(({ node }) => {
    const filePath = path.join(dir, node.slug + '.md')
    // join('') so the interpolated categories array doesn't pick up
    // commas from the default Array toString behavior.
    const contents = `---
title: "${node.title}"
pubDate: ${node.date.split("T")[0]}
author: ${node.author.node.name}
featuredImage: ${node.featuredImage?.node.sourceUrl}
categories:${node.categories.nodes
      .map((category) => `\n  - ${category.name}`)
      .join("")}
---
${NodeHtmlMarkdown.translate(node.content)}
`
    fs2.writeFileSync(filePath, contents)
  })

  return data?.posts
}

pubDate is something Astro’s happiest with, categories is an array where the first item is the most important, and author is the person’s full name, which should (and in this case does) have a corresponding object in that authors.json file we started with. (There may be a better way to handle this, but I’m not sure.)
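For illustration only, that name-based link between a post’s frontmatter and authors.json might be resolved with something like this; the matchAuthor helper and the sample data are assumptions, not actual site code:

```javascript
// Hypothetical sketch: matching a post's `author` frontmatter value
// to an entry in the authors.json data collected earlier.
// The data shape mirrors the fields gathered above.
const authors = [
  { name: "Randy Fay", firstName: "Randy", slug: "rfay" },
  { name: "Matt Stein", firstName: "Matt", slug: "mattstein" },
]

// Return the author object whose full name matches the frontmatter value.
function matchAuthor(frontmatterAuthor, authorList) {
  return authorList.find((author) => author.name === frontmatterAuthor)
}

console.log(matchAuthor("Randy Fay", authors).slug) // "rfay"
```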

The Markdown file template should pretty much speak for itself here:

---
title: "${node.title}"
pubDate: ${node.date.split("T")[0]}
author: ${node.author.node.name}
featuredImage: ${node.featuredImage?.node.sourceUrl}
categories:${node.categories.nodes
  .map((category) => `\n  - ${category.name}`)
  .join("")}
---
${NodeHtmlMarkdown.translate(node.content)}

I moved those files into src/content/blog/, where Astro automatically creates routes for .md and .astro files at build time. In ae0cf7b66d29b78bc7507cf31e74375d6194495f I refactored everything to query and use this new local content instead of the GraphQL source. This is mostly a matter of using Astro.glob() all over the place to “query” the posts needed.

A typical example of this is the blog post index page at src/pages/blog/index.astro, where I replaced some GraphQL fetching and data manipulation with Astro.glob() results I sorted by date and limited to six: https://github.com/drud/ddev.com-front-end/commit/ae0cf7b66d29b78bc7507cf31e74375d6194495f#diff-a64eb99d116304a8bedfb2764ef2708e4ede269e512028d3eca22ba03651153a
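The sort-and-limit step is plain JavaScript over the glob results; here’s a rough sketch under the assumption that each entry exposes a frontmatter.pubDate like the template above (the latestPosts helper is hypothetical, not the site’s actual code):

```javascript
// Sketch of sorting glob results by pubDate (newest first) and
// keeping only the six most recent posts. In the real page this
// would operate on the modules returned by Astro.glob().
function latestPosts(posts, limit = 6) {
  return posts
    .slice() // copy so we don't mutate the original array
    .sort(
      (a, b) =>
        new Date(b.frontmatter.pubDate) - new Date(a.frontmatter.pubDate)
    )
    .slice(0, limit)
}
```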

Things get slightly more interesting where we need to dynamically generate routes, like paginated listings and category and author detail pages. In every case, like with this categories example, we use Astro’s getStaticPaths() to dictate exactly what routes need to be created, what data should be passed along, and how to display what’s there right in the same file! You’ll see these dynamic routes in any filename with brackets, like [whatever].astro. The whatever is magically available within that component.
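As a rough sketch of the idea (not the site’s actual code), a category page’s getStaticPaths() boils down to collecting unique category names and emitting one route per slug, with the matching posts passed along as props. The inline lowercase-and-hyphenate here stands in for the real getSlug() helper:

```javascript
// Hypothetical shape of what a category page's getStaticPaths()
// returns: one route per unique category, with its posts as props.
// Field names are assumed from the frontmatter template above.
function categoryPaths(posts) {
  const names = [...new Set(posts.flatMap((post) => post.frontmatter.categories))]
  return names.map((name) => ({
    // Stand-in for getSlug(); the real site uses github-slugger.
    params: { category: name.toLowerCase().replace(/\s+/g, "-") },
    props: {
      posts: posts.filter((post) => post.frontmatter.categories.includes(name)),
    },
  }))
}
```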

In the case of these dynamic things, I’m consistently leaning on a simple method to generate slugs: https://github.com/drud/ddev.com-front-end/blob/main/src/lib/api.ts#L20-L28

This is why author detail pages are doing away with usernames (rfay) in favor of something more automatic (randy-fay). I’m blindly trusting that github-slugger is worth relying on for this since I found it in Astro’s source—but if that needs rethinking it should only be a matter of updating the getSlug() method and dealing with any relevant redirects.
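To illustrate the flavor of output, here’s a crude approximation of that slugging behavior; the real getSlug() wraps github-slugger, which also handles Unicode and duplicate slugs, so treat this only as a sketch:

```javascript
// Very rough approximation of github-slugger's behavior:
// lowercase, drop punctuation, replace whitespace with hyphens.
function roughSlug(value) {
  return value
    .toLowerCase()
    .trim()
    .replace(/[^\w\s-]/g, "") // drop punctuation
    .replace(/\s+/g, "-") // spaces become hyphens
}

console.log(roughSlug("Randy Fay")) // "randy-fay"
```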

The last step here was to add a simple plugin (I stole from somebody on the internet) that automatically assigns a layout to Markdown posts: https://github.com/drud/ddev.com-front-end/commit/ae0cf7b66d29b78bc7507cf31e74375d6194495f#diff-e0f0c5adbe0b9ca5d0b57caf5cea33a8d88899fd02a43df1e9862b185f8a1e5fR12-R24

The reason is that Astro wants you to designate a layout in the Markdown file’s frontmatter, which I think is excessive since it’ll probably always be the same and easy to forget. This tiny plugin looks at the .md’s containing folder, capitalizes the name, and looks for a corresponding layout component (so blog maps to layouts/Blog.astro). If it finds one, it automatically appends that to the Remark frontmatter (layout: ../layouts/Blog.astro).

^ This technique may be obsolete now that they’ve introduced Content Collections, but I haven’t looked into those yet.
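The folder-to-layout convention the plugin implements can be sketched as a plain function; the layoutForFile name is hypothetical, and the real version is a remark plugin that writes layout into the file’s frontmatter:

```javascript
// Hypothetical sketch of the folder-to-layout convention: take the
// .md file's containing folder, capitalize it, and point at a
// matching layout component path.
function layoutForFile(filePath) {
  const parts = filePath.split("/")
  const folder = parts[parts.length - 2] // containing folder, e.g. "blog"
  const name = folder.charAt(0).toUpperCase() + folder.slice(1) // "Blog"
  return `../layouts/${name}.astro`
}

console.log(layoutForFile("src/content/blog/my-post.md")) // "../layouts/Blog.astro"
```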

rfay commented 1 year ago

Thank you so much for that.

Seems like node-html-markdown was the piece I was wondering about most.

What's your feeling about the quality of the translation you got?

mattstein commented 1 year ago

What's your feeling about the quality of the translation you got?

So far it’s been looking really good! Nothing broken, nothing weird yet. If I have any real feedback it’ll be after the cleanup phase when I go through each post.