Unlike humans, search engines are text-driven. They crawl the Web, looking at specific site elements (mainly text) to get an idea of what each site is about. This is a simplified explanation because, as we will see next, search engines perform several activities in order to deliver search results: crawling, indexing, processing, calculating relevance, and retrieving.
1. Crawling First, search engines crawl the Web to see what is there. This task is performed by a piece of software called a crawler or a spider (Googlebot, in Google's case). Spiders follow links from one page to another and index everything they find along the way.
2. Indexing After a page is crawled, the next step is to index its content. The indexed page is stored in a giant database, from which it can later be retrieved. Essentially, indexing means identifying the words and expressions that best describe the page and assigning the page to particular keywords.
3. Retrieving When a search request arrives, the search engine processes it, i.e. it compares the search string in the request against the indexed pages in the database. The last step in a search engine's activity is retrieving the results: displaying them in the browser as pages of search results, sorted from the most relevant to the least relevant sites.
Our goal is to close the growing skill gap and train digital marketers who can help brands and businesses drive exponential growth. Learn digital marketing with courses from The Digital Sandbox (TDSB).