Especially within the past few years, search engines have become more popular than ever. Take Google and Bing, for instance. Drawing on Andrew Leibman's article "How Search Engines Work," we learn about the many different kinds of computer searching tools. First, let's talk about web crawlers. Web crawlers are "single-purpose applications that fetch data from the Web." They scan the Web for hyperlinks and categorize them to find the best results. Since speed is crucial, web crawlers begin by searching the most popular Web pages first. It's a very complicated process, because the computer has to go through many steps in very little time. The computer must then break down the data from the hyperlinks by encoding ("converting data from one form to another") and hashing ("converting words and characters into abbreviated alphanumeric values"). With one word we can find hundreds of thousands of results, but with the help of web crawlers, encoding, and hashing, we can break information down to its simplest form and get the best results we want. A small sketch of these two ideas follows below.
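
To make this more concrete, here is a minimal Python sketch of the two ideas from Leibman's article: a crawler that fetches one page and collects its hyperlinks, and a hash function that turns a word into a short alphanumeric value. This is not code from the article, just an illustration; the starting URL and the choice of MD5 truncated to eight characters are my own assumptions.

```python
import hashlib
from html.parser import HTMLParser
from urllib.request import urlopen


class LinkExtractor(HTMLParser):
    """Collects every hyperlink (href attribute) found in a page."""

    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)


def crawl(url):
    """Fetch a single page and return the hyperlinks it contains."""
    with urlopen(url) as response:
        html = response.read().decode("utf-8", errors="replace")
    parser = LinkExtractor()
    parser.feed(html)
    return parser.links


def hash_word(word):
    """Convert a word into an abbreviated alphanumeric value.

    Real search engines use their own hashing schemes; MD5 truncated
    to 8 characters is just an assumption for illustration.
    """
    return hashlib.md5(word.lower().encode("utf-8")).hexdigest()[:8]


if __name__ == "__main__":
    # example.com is a placeholder starting page, not a real crawl target
    for link in crawl("https://example.com"):
        print(link)
    print(hash_word("search"))  # prints an 8-character alphanumeric value
```

A real crawler would then add each discovered link to a queue and repeat the process, visiting the most popular pages first, which is how it covers so much of the Web so quickly.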