Essentially, this whole debate comes down to site architecture and the likelihood of search engine bots finding your pages.
As we’ve mentioned, you need search engine bots, crawlers and spiders to find your pages, because if they don’t, your pages don’t get indexed, rendered or ranked. And then it’s “bye-bye, new human web visitors.”
With that being said, imagine your website as one big library of pages. We’ll refer to this big and stunning library as the domain.
Instead of keeping books, your domain is the library that houses all your special web pages. These pages are so special that your library is visited by hundreds of people every day.
Now assume you are the librarian, tasked with maintaining the welfare of the domain. You protect the integrity of the library and manage its performance (ensuring many people come to visit your library every day).
Assignment number one, as librarian: decide how to add a new room to the library.
For the sake of your website, imagine each room in your library represents a new folder on your website. Each room offers an opportunity to group related pages in one place, making it easier for visitors to find more pages like the ones they enjoy.
To expand your library (to add a room), you are given two options:
- Build a room inside the primary domain (the main library) - the subdirectory approach
- Build a separate building just outside your main library - the subdomain approach
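To make the two options concrete, here is what each looks like as a URL, using a hypothetical site at example.com with a "blog" section (the domain and section name are placeholders, not from any real site):

```
https://example.com/blog/    <- subdirectory: the new room inside the main library
https://blog.example.com/    <- subdomain: the separate building next door
```

In the subdirectory version, the blog lives under the main domain's path structure; in the subdomain version, it sits in front of the domain name and is often treated by search engines as a more separate property.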