Case Study: Optimizing a Next.js News Site - Exploration into Performance, Accessibility, Best Practices, and SEO

An In-depth Analysis of Enhancing Web Application Performance, Accessibility, User Experience, and SEO

In the whirlwind world of web development, evolution is the only constant. New tools and techniques take the stage every day, each armed with its own unique set of challenges and benefits. Recently, I took on a personal challenge with Next.js.

My mission? To max out the performance of a client's Next.js news site, with a target of a 100% performance score on Google's PageSpeed Insights. I didn't quite hit the mark, but hey, a 95% average score still isn't too shabby, right?

As a developer, I firmly believe understanding the 'how' isn’t enough. The 'why' is equally crucial. So, fasten your seatbelt and prepare for a thrilling ride, because together, we're going to dive down the rabbit hole, get neck-deep in code, and uncover what it takes to crack open a high-performance Next.js application.

This journey isn't just personal to me; it's about sharing the fight, the struggle, the learning, and ultimately, the triumph. I hope that by unveiling my experience, you will walk away with valuable insights to face your enhancement challenges head-on. And remember, it's not about the destination, it's about the journey. Ready to embark? - Let's get technical!

Optimization Strategy

The optimization process was not a straightforward task but rather a series of iterative updates and tweaks designed to gradually improve overall performance. The strategy involved diving deep into each aspect of the site - from code structure to SEO practices, and making specific adjustments to maximize efficiency. Here are some of the key steps in the optimization process:

Modularity in Codebase

One initial area of focus was modularity in a large codebase. The application had grown complex over time (I built it last year and have been upgrading it ever since), so it became essential to break the code down into independent parts.

Here's the deal: every function used in your application inevitably costs memory and time (space-time complexity). When the same logic is duplicated across the codebase, it bloats the bundle and consumes more memory, gradually chipping away at performance and making the application feel sluggish. That's a situation we'd rather avoid.

To fix this, I opted to systematically decompose the codebase into focused, independent parts: cleaning up the mess and making the application lean, mean, and fighting fit lol🤣!

The example below paints this transformation vividly. Initially, I had a large component housing essential logic. This code was responsible for rendering items fetched from external resources, complete with pagination to boot!

import { useState, useEffect } from "react";
import ReactPaginate from "react-paginate";

// "items" is assumed to be the full list already fetched from the external resource
function PaginatedItems({ itemsPerPage }) {
  const [currentItems, setCurrentItems] = useState(null);
  const [pageCount, setPageCount] = useState(0);
  const [itemOffset, setItemOffset] = useState(0);

  useEffect(() => {
    // Slice out only the items belonging to the current page
    const endOffset = itemOffset + itemsPerPage;
    setCurrentItems(items.slice(itemOffset, endOffset));
    setPageCount(Math.ceil(items.length / itemsPerPage));
  }, [itemOffset, itemsPerPage]);

  const handlePageClick = (event) => {
    const newOffset = (event.selected * itemsPerPage) % items.length;
    setItemOffset(newOffset);
  };

  return (
    <>
      <Items currentItems={currentItems} />
      <ReactPaginate onPageChange={handlePageClick} pageCount={pageCount} />
    </>
  );
}

Then, I moved the item-rendering functionality into its own component, named Items. This component was now independent and could be reused freely throughout the application.

const Items = ({ currentItems }) => {
  return (
    <>
      {currentItems &&
        currentItems.map((item) => (
          <div key={item}>
            <h3>Item #{item}</h3>
          </div>
        ))}
    </>
  );
};

Taking it a step further, I extracted reusable functions out of all the duplicated logic in my codebase. For example, the formatDate() function, which takes an unformatted timestamp and returns a formatted date string, became a module of its own. This modular approach gave the core logic some much-needed breathing space.

export function formatDate(inputDate) {
  if (!inputDate) {
    return "";
  }
  // Drop the trailing character of the raw timestamp (e.g. the "Z" suffix)
  inputDate = inputDate.substring(0, inputDate.length - 1);
  let date = new Date(inputDate);
  let options = { year: "numeric", month: "short", day: "2-digit" };
  let formattedDate = date.toLocaleString("en-US", options);
  // Swap the first space for a period, e.g. "Jun 15, 2023" -> "Jun.15, 2023"
  formattedDate = formattedDate.replace(" ", ".");
  return formattedDate;
}
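
With the helper living in its own module, any component can import it instead of redefining the logic. Here is a small usage sketch; the module path is hypothetical and the output shown is only an illustration of the formatting above:

// Anywhere in the app, the shared helper can now be reused
import { formatDate } from "../utils/formatDate"; // hypothetical module path

const published = formatDate(post._createdAt); // e.g. "Jun.15, 2023"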

By componentizing pages and modularizing functions, the code became more readable and more robust, and most importantly, its performance improved. After all, every byte, much like every penny, counts!

So, here's the takeaway: always remember that the heart of programming lies in solving problems, not in complicating them. Embrace modularity.

Server-Side Rendering (SSR) vs Static Site Generation (SSG)

Next, I sought a balance between Server-Side Rendering (SSR) and Static Site Generation (SSG). SSR generates new HTML for every request, which added a significant load to the server as the page logic and the number of pages grew, resulting in slower page loads.

I must confess, ever since Server-Side Rendering made its entrance in web dev, it's been a bit of a pet favorite of mine. There's something irresistibly cool about it. But here's the kicker: while it's undeniably fascinating, in my opinion its real charm lies in smaller applications and standalone pages.

So in my case, I had to refactor my pages to use getStaticProps where possible and implement Incremental Static Regeneration (ISR) for pages in need of fresh data. With ISR, I was able to update static content after deployment without rebuilding the entire site. This improved my performance score hugely.
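
For illustration, here is a minimal sketch of that setup in a pages-router file such as /news/[slug].jsx. The getAllSlugs() and getPost() helpers are hypothetical stand-ins for your data layer, and the 60-second revalidate window is just an example value:

export async function getStaticPaths() {
  const slugs = await getAllSlugs(); // hypothetical helper returning every post slug
  return {
    paths: slugs.map((slug) => ({ params: { slug } })),
    fallback: "blocking", // render unknown slugs on first request, then cache them
  };
}

export async function getStaticProps({ params }) {
  const post = await getPost(params.slug); // hypothetical helper fetching one post
  return {
    props: { post },
    revalidate: 60, // ISR: regenerate this page at most once every 60 seconds
  };
}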

Let me know if I should write a dedicated technical post on SSG and ISR.

Image Optimization

Images are often the largest assets on a webpage, and optimizing them is always a vital step in improving page load times. By default, the <Image /> component from Next.js does the job: it serves appropriately sized images in modern formats while ensuring quality is not tossed overboard.

And ooh, the alt text! It might seem like a mere supporting detail, but it wields a powerful impact, notably on accessibility and SEO. Here's an example showcasing the correct usage of the Next.js Image with the alt attribute:

import Image from "next/image";

// Minimal wrapper component just to make the snippet self-contained
export default function ProfileImage() {
  return (
    <Image
      src="/me.png" // Route of the image file inside /public
      height={500} // Desired height in pixels
      width={500} // Desired width in pixels
      alt="A picture of me coding nextjs" // A description for accessibility and SEO
    />
  );
}

SEO and Accessibility

SEO optimization was a priority, given the importance of unique and accurate title/meta descriptions for each page in search engine results. By implementing the Open Graph protocol, I was able to enrich link previews with a title, description, and image, making them both attractive and informative. To make this task easier, I used the NextSEO package which handled adding all the relevant meta tags. Lastly, I made sure to provide alt text to every image.

      <NextSeo
        title={`${post.title}-Daily Mirror News`}
        description={`${post.summary}`}
        canonical={`https://dailymirrornews.co.zw/news/${post.slug.current}`}
        openGraph={{
          url: `https://dailymirrornews.co.zw/news/${post.slug.current}`,
          title: `${post.title}-Daily Mirror News`,
          description: `${post.summary}`,
          images: [
            {
              url: `${urlFor(post.mainImage).url()}`,
              width: 1224,
              height: 724,
              alt: "Post image",
              type: "image/jpeg",
            },
          ],
          site_name: "Daily Mirror News",
          type: "article",
          article: {
            publishedTime: post._createdAt,
            modifiedTime: post._updatedAt,
            authors: [`${post.author.name}`],
            section: `${post.categories.title}`,
            tags: post.tags,
          },
        }}
        additionalLinkTags={[
          {
            rel: "icon",
            href: "https://dailymirrornews.co.zw/logo.png",
          },
          {
            rel: "apple-touch-icon",
            href: "https://dailymirrornews.co.zw/logo.png",
            sizes: "76x76",
          },
        ]}
        facebook={{
          appId: "your-app-id",
          publisher: "https://dailymirrornews.co.zw/",
        }}
        twitter={{
          handle: "@mirrornewspaper",
          site: "@mirrornewspaper",
          cardType: "summary_large_image",
        }}
      />

Okay, what the heck is Open Graph in SEO?

Open Graph is a protocol that allows web pages to become rich objects in a social graph, presenting shared web content in a more engaging way on social platforms like Facebook or Twitter.

When you share a URL on social platforms like Facebook, Twitter, or LinkedIn, they scrape the page for Open Graph meta tags to generate a preview - an image, a title, and a description. By defining these Open Graph tags in your application, you can control what these previews look like and give them a more appealing appearance, encouraging users to click on your shared content.

Here is the Open Graph configuration from my /news/[slug].jsx page:

openGraph={{
  url: `https://dailymirrornews.co.zw/news/${post.slug.current}`,
  title: `${post.title}-Daily Mirror News`,
  description: `${post.summary}`,
  images: [
    {
      url: `${urlFor(post.mainImage).url()}`,
      width: 1224,
      height: 724,
      alt: "Post image",
      type: "image/jpeg",
    },
  ],
  site_name: "Daily Mirror News",
  type: "article",
  article: {
    publishedTime: post._createdAt,
    modifiedTime: post._updatedAt,
    authors: [`${post.author.name}`],
    section: `${post.categories.title}`,
    tags: post.tags,
  },
}}
  • url is the canonical URL of your page that will be its permanent ID in the graph, typically a permalink.

  • title is the title of your article without any branding such as your site name.

  • description is a one- to two-sentence description of your content.

  • images is an array of image objects, which consist of properties url, alt, and optionally type, width and height. Represents images that are shown when the webpage is shared across social platforms.

  • site_name is the name that appears in the attribution of your content.

  • type describes the type of media used, in this case, an 'article'.

  • article is an object that contains properties specific to articles. It could include posted time, authorship, section, and tags associated with the article.

By properly defining these Open Graph tags, I ensure that the shared content on social media maintains its integrity, helping to drive more traffic from these platforms.

Core Web Vitals

Lastly, Core Web Vitals! This is something I am still learning, and I think it accounts for a good part of why I ended up with an overall score of 95% rather than 100%.

Core web vitals are a set of metrics that Google considers important in a webpage's overall user experience. They are part of Google's page experience signals used in ranking web pages in search results. The Core Web Vitals consist of three specific page speed and user interaction measurements:

  1. Largest Contentful Paint (LCP): This measures loading performance. To provide a good user experience, LCP should occur within 2.5 seconds of when the page first starts loading.
  2. First Input Delay (FID): This measures interactivity. To provide a good user experience, pages should have an FID of less than 100 milliseconds.
  3. Cumulative Layout Shift (CLS): This measures visual stability. To provide a good user experience, pages should maintain a CLS of less than 0.1.

In my case, I achieved fair results: an FCP of 0.7s, an LCP of 0.9s, and a CLS of 0. However, my TBT (Total Blocking Time) was 120ms, which is above the recommended 50ms. TBT measures the total time between First Contentful Paint (FCP) and Time to Interactive (TTI) during which the main thread was blocked for long enough to prevent input responsiveness.
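
If you want to watch these numbers from real sessions rather than lab runs, Next.js (pages router) exposes a reportWebVitals hook. Here is a minimal sketch, with console.log standing in for whatever analytics endpoint you actually use:

// pages/_app.js
export function reportWebVitals(metric) {
  // metric.name will be e.g. "LCP", "FID", "CLS", "FCP" or "TTFB"
  if (["LCP", "FID", "CLS"].includes(metric.name)) {
    console.log(metric.name, metric.value); // stand-in for a real analytics call
  }
}

export default function MyApp({ Component, pageProps }) {
  return <Component {...pageProps} />;
}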

To optimize these metrics in a Next.js application, it is recommended to consider the following:

  • Code Splitting: Next.js supports automatic code splitting, meaning that each page only loads the JavaScript necessary for that page. You can also split further by hand with dynamic imports (see the sketch after this list).
  • Optimizing Images: Next.js has an Image component that optimizes image loading for your application. ✅
  • Server-side Rendering (SSR): Next.js can render the HTML for a page on the server, which can help with perceived performance and SEO.
  • Static Site Generation (SSG): For pages that can be prerendered, Next.js supports Static Site Generation, which can improve performance by serving static files. ✅
  • Using a CDN: A Content Delivery Network can serve your content, including images, from edge servers around the globe, reducing the time users have to wait for content to load.
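
As a small sketch of that manual splitting with next/dynamic, here is roughly what lazy-loading a heavy client-side widget could look like; HeavyChart and its path are hypothetical and not part of the actual news site:

import dynamic from "next/dynamic";

// The chart's JavaScript is only downloaded when this component actually renders
const HeavyChart = dynamic(() => import("../components/HeavyChart"), {
  ssr: false, // skip server rendering for a purely client-side widget
  loading: () => <p>Loading chart…</p>,
});

export default function StatsSection() {
  return <HeavyChart />;
}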

This effort to optimize the Next.js news application proved rewarding. Achieving these scores was no small feat, but it shows that, through thoughtful consideration, it is possible to build powerful, robust web applications without compromising on performance, accessibility, user experience, or SEO.

But it doesn't stop here lol. There's always room for improvement, as you have seen with the Core Web Vitals. The journey to perfect optimization is ongoing, and I'm excited about the future improvements Next.js has in store. I hope sharing this case study provides valuable insights and strategies as you embark on your own optimization journey.

Pfeww🥵 yeah, that was a long stretch, but quite interesting eh? ...