How To Make Money Online By Understanding SEO Concepts

SEO And Organic Traffic Is Key To Online Business Success

Zafar Saleem
9 min read · Mar 14, 2023

In the online world, traffic is an asset and, ultimately, money. Understanding SEO is essential to generating organic traffic to your websites, and it is one of the key marketing strategies (among many) for businesses. Many traffic sources are paid; SEO is one of the few that is free of monetary cost, although it is not completely free because it costs time and effort. In this blog post I divide SEO into four categories and explore it from each of these perspectives.

  1. SEO & Search Engines
  2. Rendering Strategies & SEO
  3. Developers Contributions
  4. Content Writers To SEO Rescue

SEO & Search Engines

Crawling

Ever used Wikipedia? I presume yes. When we read information on that platform, we tend to find more links that promise more information, so we keep clicking on them to dig deeper into the topic we are interested in. That is how we discover more web pages with relevant information: land on an initial web page, find links that lead to other pages and websites, and keep clicking to learn more.

And that, in a nutshell, is how search engines' web crawling bots work (😂). Granted, that was the non-geeky, non-techy explanation. It happens sometimes.

From a more technical perspective, web crawlers (or bots) usually follow certain policies when deciding whether to crawl a given page, for instance the importance of a page. If a page is linked to by many high-authority web pages and receives a significant amount of traffic and visitors, it can be regarded as a high-quality page, and indexing it becomes important for web crawlers.[1]

After the first visit, web crawlers need to re-crawl these web pages in the future, because the internet keeps changing and new and updated information keeps being added. Therefore, to keep the index up to date, web crawlers tend to revisit already indexed pages.

In addition, robots.txt (which I will explain later) is also the first file that web crawlers land on. That is where a crawler learns which pages in the current domain are available for indexing and which ones to avoid; the pages listed as disallowed in this file are skipped.[1]

Indexing

We keep clicking on the links we discover during the crawling phase, but many of us forget most of them, or remember less and less, for dementia- and amnesia-like reasons (😜) at some point in our lives.

Search engines, however, don't forget, because they index the new links, web pages and websites they discover. In other words, they store them in a database. In Google's context, this indexing system is called Caffeine.

When we search on Google, it runs our query against its indexed database rather than searching the live internet. The Caffeine system indexes the web far more frequently, essentially in near real time, than its predecessor, which used to index the web only a few times a month. It also improved the indexing of dynamic content on the internet, such as posts from Facebook and other social media platforms.[2]

Ranking

Now that search engines have indexed that information in a database, a bit more work is needed. Their algorithms then rank these pages and links based on many different criteria.

They need to figure out the quality of the content, which they do partly through traffic statistics for a particular page and site. In addition, backlinks are one of the factors that help with high ranking: the more a particular web page is linked to from other websites and pages, the more it is treated as authentic and as containing valuable, relevant content.

Moreover, the quality of the domain sending that traffic also matters. If the source domain has valuable content and high traffic, and it links to a certain page, the targeted web page has a better chance of ranking higher on SERPs.

Internal links are also one of the signals that Caffeine and similar systems use for ranking on SERPs (Search Engine Result Pages). If a particular web page has several reasonable links to other content on the same domain, that helps the page and the site rank higher.[3]

All of the above factors, and more, feed into Google's ranking algorithm known as PageRank.
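
To make the backlink idea a bit more concrete, here is a toy sketch of the original PageRank intuition in TypeScript: a page scores highly when pages that themselves score highly link to it. The graph and the numbers are made up, and Google's production ranking is of course far more elaborate than this.

```ts
// A toy PageRank: a page scores highly when highly scored pages link to it.
// Purely illustrative — real ranking uses many more signals than link structure.
type Graph = Record<string, string[]>; // page -> pages it links out to

function pageRank(graph: Graph, damping = 0.85, iterations = 20): Record<string, number> {
  const pages = Object.keys(graph);
  const n = pages.length;
  let rank: Record<string, number> = {};
  pages.forEach((p) => (rank[p] = 1 / n)); // start with an even split

  for (let i = 0; i < iterations; i++) {
    const next: Record<string, number> = {};
    pages.forEach((p) => (next[p] = (1 - damping) / n)); // "random surfer" baseline
    for (const page of pages) {
      const links = graph[page].length ? graph[page] : pages; // dangling page shares with everyone
      const share = rank[page] / links.length;
      for (const target of links) {
        if (target in next) next[target] += damping * share; // pass rank along each outgoing link
      }
    }
    rank = next;
  }
  return rank;
}

// "home" is linked from every other page, so it ends up with the highest score.
console.log(pageRank({ home: ["blog"], blog: ["home", "about"], about: ["home"] }));
```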

Rendering Strategies

Now that we know how search engines crawl, index and rank our web pages, it is time to get into some of the concepts that help them rank higher on SERPs. If we simply publish content and websites online, they will most likely not get noticed by search engines; it takes effort to get our content and web pages noticed. Therefore, we need to follow certain best practices, tricks and techniques.

Of the categories I split this blog post into, two belong to developers. The first one is rendering strategies. At the framework or library level, developers add rendering strategies that help optimize our websites and web pages for search engines and, ultimately, monetization.

  1. SSG
  2. SSR
  3. ISR

1. SSG (Static Site Generation) & SEO

SSG comes into play when the development of a website or web page is complete and we need to build it for the production environment. This step is necessary to optimize the website and enhance its performance. Performance is one of the factors search engine algorithms use for ranking, and it also provides a better experience to end users, clients and customers.

a. What the hell is building?

The build process basically reduces the size of the assets and resources used in our applications and then places them either on dedicated servers, a CDN, when the resources are split across multiple files, or on the same domain when there are only a few.

b. Why?

The build process is needed for both monolithic websites/applications and SPAs. SPAs are generally considered poor for SEO, which is why they are mostly used for applications where SEO matters less. However, to make our website content discoverable by search engine bots, we need to follow monolithic concepts (static HTML with dynamic data) while still following modern practices.

Search engines find it easy to crawl static HTML files. Therefore, developers use several rendering strategies to serve dynamic data as static web pages so that they get discovered easily by search engines. One of those rendering strategies is SSG.

c. Now SSG

With SSG we build our project at build time. When the client makes a request and the server needs to serve users the appropriate data and content, it finds already built static HTML pages on the server and renders them on users' displays.
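
As a rough sketch of what that looks like in practice, here is a minimal SSG page in Next.js (one popular framework that supports this strategy); the CMS endpoint and the data shape are hypothetical.

```tsx
// pages/blog.tsx — a minimal Next.js sketch; the endpoint is made up.
// getStaticProps runs once, at build time, so the page ships as plain HTML.
import type { GetStaticProps } from "next";

type Post = { slug: string; title: string };

export const getStaticProps: GetStaticProps<{ posts: Post[] }> = async () => {
  const res = await fetch("https://example.com/api/posts"); // hypothetical endpoint
  const posts: Post[] = await res.json();
  return { props: { posts } };
};

export default function Blog({ posts }: { posts: Post[] }) {
  return (
    <ul>
      {posts.map((post) => (
        <li key={post.slug}>{post.title}</li>
      ))}
    </ul>
  );
}
```

The trade-off, as the next section explains, is that these pages have to be rebuilt whenever the underlying content changes.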

2. SSR (Server Side Rendering)

While SSG is great from a performance and SEO perspective, large projects tend to have long build times. Therefore, an alternative solution was needed, and developers came up with SSR. With this approach, when the client makes a request, the server builds the requested web page on demand, producing static HTML with dynamic data.
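
A minimal sketch of the same idea as SSR, again assuming Next.js and a hypothetical API; the structural difference is that the data fetching now runs on every request instead of at build time.

```tsx
// pages/profile.tsx — getServerSideProps runs per request, so the HTML
// is generated on demand with fresh data (hypothetical endpoint).
import type { GetServerSideProps } from "next";

type Profile = { name: string };

export const getServerSideProps: GetServerSideProps<{ profile: Profile }> = async (context) => {
  // context carries the incoming request: query params, cookies, headers, etc.
  const res = await fetch(`https://example.com/api/profile/${context.query.id}`); // hypothetical
  const profile: Profile = await res.json();
  return { props: { profile } };
};

export default function ProfilePage({ profile }: { profile: Profile }) {
  return <h1>{profile.name}</h1>;
}
```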

3. ISR (Incremental Static Regeneration)

Instead of rebuilding the entire website like the solutions above, which gets more and more time-consuming as a project grows, ISR rebuilds web pages on a per-page basis, which makes it convenient for both developers and content writers to update their sites and pages incrementally.
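
In Next.js, ISR is essentially SSG plus a revalidation interval; here is a minimal sketch, again with a hypothetical endpoint.

```tsx
// pages/news.tsx — getStaticProps again, but `revalidate` tells Next.js to
// regenerate just this page in the background at most once every 60 seconds.
import type { GetStaticProps } from "next";

type Article = { id: string; headline: string };

export const getStaticProps: GetStaticProps<{ articles: Article[] }> = async () => {
  const res = await fetch("https://example.com/api/news"); // hypothetical endpoint
  const articles: Article[] = await res.json();
  return { props: { articles }, revalidate: 60 };
};

export default function News({ articles }: { articles: Article[] }) {
  return (
    <ul>
      {articles.map((article) => (
        <li key={article.id}>{article.headline}</li>
      ))}
    </ul>
  );
}
```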

Developers Contributions

Apart from contributing at the framework level, developers also need to put effort into on-page SEO. This contribution happens while developing the websites and web pages themselves. Below are a few areas where developers can contribute even more to making websites and web pages discoverable by search engine bots.

  1. URL Structure And SEO
  2. Meta Tags Key To SEO
  3. HTML Semantics
  4. Files That Matter In SEO

1. URL Structure And SEO

The URL structure, both at the top level and for its children, needs to follow certain criteria. For instance, using words or keywords instead of numbers makes it easier for search engine bots to crawl and discover more pages; for example, /blog/seo-basics-for-developers is easier to interpret than /blog?id=4821.

2. Meta Tags Key To SEO

Meta tags are usually added between the HTML head tags. They are among the first things that search engine bots look at when they arrive on a page, so they need to contain information about the page currently being served rather than generic, site-wide information.

a. Title Tag SEO

The very first thing that search engine bots look at is the title tag. It needs to be relevant to the page's information. Secondly, on SERP pages, this is the piece of information that is highlighted as the target link. The more useful and well-optimized keywords the title contains, the better its chance of hitting the first SERP pages.

b. SEO Description Via Meta Tag

Ever noticed the additional description under the top-level link of a result on SERP pages? That comes from the description meta tag. This information is not necessarily a huge factor in SEO ranking, but it is always a good idea to provide key information to users on the SERP, which can turn them into visitors to your site and even returning users.
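
A minimal sketch of both tags together, using Next.js's next/head component (one common way to set them); the page name and the copy are placeholders.

```tsx
// pages/seo-basics.tsx — a hypothetical page setting its own title and
// description, specific to this page rather than site-wide.
import Head from "next/head";

export default function SeoBasicsPage() {
  return (
    <>
      <Head>
        <title>SEO Basics for Developers | Example Blog</title>
        <meta
          name="description"
          content="How crawling, indexing and ranking work, and what developers can do about each."
        />
      </Head>
      <main>{/* page content */}</main>
    </>
  );
}
```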

c. SEO Robots Meta Tag

The robots.txt file (covered below) can certainly keep certain pages out of the index, but the robots meta tag can also be used on individual pages to make them discoverable or not.
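
For example, a page you never want on SERPs can opt itself out like this — again a next/head sketch, and the page itself is hypothetical.

```tsx
// pages/drafts.tsx — keep this one page out of the index while the rest
// of the site stays crawlable.
import Head from "next/head";

export default function DraftsPage() {
  return (
    <>
      <Head>
        <meta name="robots" content="noindex, nofollow" />
      </Head>
      <main>{/* content that should not show up on SERPs */}</main>
    </>
  );
}
```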

d. SEO Optimization via Open Graph

Humans are social animals by nature; in today's world, however, we are more like virtual social animals. By that I mean we prefer to be socially active online, via platforms such as Facebook, Twitter and Instagram.

Search engine bots were designed primarily to crawl static pages. Open Graph meta tags, however, let social platforms and their bots understand our content and posts, so that our pages get discovered more easily and these platforms can serve users valuable content in a richer way.
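
A sketch of the usual Open Graph tags, added with next/head; every URL and string here is a placeholder.

```tsx
// pages/seo-basics.tsx — Open Graph tags describe how the page should be
// previewed when shared; all values below are hypothetical.
import Head from "next/head";

export default function SeoBasicsPage() {
  return (
    <>
      <Head>
        <meta property="og:title" content="SEO Basics for Developers" />
        <meta property="og:description" content="Crawling, indexing and ranking, explained for developers." />
        <meta property="og:type" content="article" />
        <meta property="og:url" content="https://example.com/seo-basics" />
        <meta property="og:image" content="https://example.com/images/seo-basics-cover.png" />
      </Head>
      <article>{/* article body */}</article>
    </>
  );
}
```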

3. HTML Semantics

HTML semantics allow us to structure our web pages with meaningful HTML tags such as section, article, aside, header, footer, nav and p. Content placed inside these tags tells search engine bots what each block of content is: the more semantically correct the HTML, the better job crawlers can do. It might not be a major ranking factor, but from a search engine's perspective it does play a role in how well the web pages are crawled.
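
A small sketch of what that structure can look like, written here as a React/TSX component to stay consistent with the earlier examples; the tags themselves are plain HTML semantics.

```tsx
// The same page content wrapped in semantic tags instead of anonymous divs,
// so crawlers can tell navigation from the article from the footer.
export default function SemanticLayout() {
  return (
    <>
      <header>
        <nav>{/* primary navigation links */}</nav>
      </header>
      <main>
        <article>
          <h1>SEO Basics for Developers</h1>
          <section>{/* the actual content */}</section>
          <aside>{/* related links */}</aside>
        </article>
      </main>
      <footer>{/* copyright, secondary links */}</footer>
    </>
  );
}
```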

4. Files That Matter For SEO

A couple of files that do matter when it comes to SEO, and that developers are expected to include in their websites, are listed below.

a. Robots.txt

This file usually lives at the root of your site, and search engines read it before crawling the website. It tells crawlers which pages to avoid, and well-behaved crawling bots will follow those instructions, although not all search engine bots strictly comply. Still, it is a vital file if you want certain pages not to be indexed.
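
A minimal, hypothetical robots.txt might look like this; the admin path and the sitemap URL are placeholders.

```
# robots.txt — served from the site root, e.g. https://example.com/robots.txt
# A minimal, hypothetical example: block the admin area, allow everything else,
# and point crawlers at the sitemap.
User-agent: *
Disallow: /admin/

Sitemap: https://example.com/sitemap.xml
```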

b. Sitemap

A sitemap is an XML file that basically gives search engine bots a navigation structure. It makes it easy for them to crawl all of the pages listed in the file, which also usually resides at the root of your project.
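
A minimal, hypothetical sitemap.xml listing two pages of a made-up site:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!-- sitemap.xml — a minimal, hypothetical example with two entries -->
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>https://example.com/</loc>
    <lastmod>2023-03-14</lastmod>
  </url>
  <url>
    <loc>https://example.com/blog/seo-basics</loc>
    <lastmod>2023-03-01</lastmod>
  </url>
</urlset>
```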

Content Writers To SEO Rescue

Now that the website is ready and deployed to production, it is time to put some content in it. That content needs to be meaningful, valuable and authoritative, as elaborated in previous sections of this blog post. That is where content writers play their role.

For the web, content writers have to write content that is optimized for search engines in addition to taking care of all the traditional practices. For this they need to conduct thorough keyword research around the topic of the content.

Keywords are essential and help writers figure out what people are mostly searching for on the web. The more accurately the keywords in the content match those searches, the better the chances that our web pages land in the top three results on SERPs.

There are several tools available to make this possible. One of the prominent ones is Ahrefs. I have never used it personally, but it keeps popping up in different articles online, which suggests it is a fairly credible tool to make use of. Google Trends can also give some insight here.

These tools not only help you find the proper keywords but also help you write content that is more readable and SEO focused.

In addition, if you would like to use browser extensions, I recently stumbled upon one called Keywords Everywhere. It shows some insights about the keywords you just googled directly on Google's SERP.

Conclusion

In this blog post I divided SEO into four categories and explained each of them without going deep into technical implementation, beyond a few small sketches, since full implementations are out of the scope of this post. Initially I elaborated on how search engines work; then the contribution of developers was explained on two different levels; and finally, the contribution of content writers.

I hope you liked this blog post. If you would like to read more blog posts on other topics, have a look below. Goodbye for now.

More Articles & blog posts from Zafar

https://betterprogramming.pub/event-base-architecture-using-react-c0600d29d5ae

https://zafarsaleem.medium.com/how-to-hack-algolia-search-to-enhance-react-1fc63e95ccab

https://zafarsaleem.medium.com/aws-dynamodb-nextjs-building-real-world-app-714f439fa059

https://javascript.plainenglish.io/github-actions-aws-and-react-app-98ef1847ed5

https://zafarsaleem.medium.com/creating-a-next-js-architecture-that-scales-how-to-set-up-the-development-tools-f7db99321a10

https://towardsdev.com/introduction-to-debouncing-in-javascript-387d17a9eb75

https://zafarsaleem.medium.com/introduction-to-throttling-using-javascript-fe7a64ee3be3

https://zafarsaleem.medium.com/blockchain-introduction-using-real-world-dapp-react-solidity-web3-js-546471419955

Sources

[1]https://www.cloudflare.com/learning/bots/what-is-a-web-crawler/

[2]https://www.acmeinfolabs.com/blog/google-caffeine-algorithm-a-complete-guide/

[3]https://moz.com/beginners-guide-to-seo/how-search-engines-operate
