A Web crawler is a computer program that browses the World Wide Web in a methodical, automated manner. This process is called Web crawling or spidering.
Many sites, in particular search engines, use spidering as a means of keeping their data up to date. Web crawlers are mainly used to create a copy of all the visited pages for later processing by a search engine, which indexes the downloaded pages to provide fast searches. Crawlers can also be used to gather specific types of information from Web pages, such as harvesting e-mail addresses.
A Web crawler is one type of bot, or software agent. It starts with a list of URLs to visit, called the seeds. As the crawler visits these URLs, it identifies all the hyperlinks in each page and adds them to the list of URLs to visit, called the crawl frontier. URLs from the frontier are recursively visited according to a set of policies.
Other terms for Web crawlers are ants, automatic indexers, bots, Web spiders, Web robots, or Web scutters.
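The seeds-and-frontier loop above can be sketched in a few lines of Python. In place of real HTTP fetching and HTML parsing, a hypothetical in-memory link graph stands in for the Web, so only the frontier logic is shown:

```python
from collections import deque

# A toy link graph standing in for real HTTP fetches (hypothetical URLs).
LINKS = {
    "http://a.example/": ["http://b.example/", "http://c.example/"],
    "http://b.example/": ["http://c.example/", "http://d.example/"],
    "http://c.example/": [],
    "http://d.example/": ["http://a.example/"],
}

def crawl(seeds, max_pages=10):
    """Breadth-first crawl: seeds feed the frontier, visited URLs are skipped."""
    frontier = deque(seeds)              # the crawl frontier
    seen = set(seeds)
    visited = []
    while frontier and len(visited) < max_pages:
        url = frontier.popleft()
        visited.append(url)              # "fetch" the page
        for link in LINKS.get(url, []):  # identify its hyperlinks
            if link not in seen:         # policy: visit each URL only once
                seen.add(link)
                frontier.append(link)
    return visited

print(crawl(["http://a.example/"]))
```

A real crawler would replace the `LINKS` lookup with an HTTP request plus link extraction, and the visit-once policy with politeness and freshness policies.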

You can see the description tag a web page uses by viewing the source of the page you are looking at (View > Source in Internet Explorer; most other browsers have a similar View > Page Source option).

Description tags serve a double purpose. First, they're one of the first places a search engine looks for keywords.

The Description tag is not just another place to stuff SEO keywords, however. Second, on many search engines the description tag is what they'll show as your blurb when your website appears on their SERP (Search Engine Results Page).
In other words, this may very well be a visitor's first point of contact with you.
Description tags serve two purposes:
· First, in SEO: to get your page ranked higher on the SERP by including your most important keywords in this prominent place.

· Second, to attract visitors to your site by presenting them with a brief overview of your business and services.

Conciseness is the key to a good description tag. Decide on your most important keyword, and use it at least twice. If you can't figure out how to include that keyword twice and still write an appealing description, then either find someone who can teach you, or hire someone to write the Description tags for you.
Choose two or three other keywords that you feel are important, and include them once each. It's always worth double-checking the spelling and grammar in the description tag.
When deciding what to put in the meta description, treat it as a benefit statement for whatever is in your title tag. For instance, say you're selling shoes and the title is about a nice pair of joggers; your description tag might be 'Selling a range of comfy, durable and hard-working joggers, with a 10-year guarantee'. Make sure you get your keywords in there.
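To check what a page's description tag actually contains, you can also parse the source programmatically. Here is a small sketch using Python's standard html.parser, run against an inline sample page rather than a live site:

```python
from html.parser import HTMLParser

class DescriptionFinder(HTMLParser):
    """Pulls the content of <meta name="description"> out of page source."""
    def __init__(self):
        super().__init__()
        self.description = None

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if tag == "meta" and a.get("name", "").lower() == "description":
            self.description = a.get("content")

# An inline sample page; a real check would fetch the live HTML instead.
page = """<html><head>
<title>Joggers</title>
<meta name="description" content="Selling a range of comfy, durable and hard-working Joggers, with a 10 year guarantee">
</head><body></body></html>"""

finder = DescriptionFinder()
finder.feed(page)
print(finder.description)
```

This is exactly the string a search engine may show as your SERP blurb, which is why it is worth inspecting.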
While Google and AOL may not place much weight on this tag, you may still pick up extra traffic from Yahoo!, MSN or AltaVista.


Everybody uses Google; it's by far the best and most popular search engine on the Internet. Have you ever asked yourself how Google decides which site ranks first and which site appears on the last page of the search results? Of course, it uses your keywords to determine which site is the most relevant to your query, but how does Google decide between two sites with the exact same title? You can push it even further: how would Google decide which site to rank first between two sites with the same content and the same title?
Google's ranking algorithm is called PageRank and was developed by Google's founders Larry Page and Sergey Brin at Stanford University. It is the core of Google's ranking, even if many other factors influence the results. Here's how Google defines its PageRank algorithm:

PageRank relies on the uniquely democratic nature of the web by using its vast link structure as an indicator of an individual page’s value. In essence, Google interprets a link from page A to page B as a vote, by page A, for page B. But, Google looks at considerably more than the sheer volume of votes, or links a page receives; for example, it also analyzes the page that casts the vote. Votes cast by pages that are themselves “important” weigh more heavily and help to make other pages “important.” Using these and other factors, Google provides its views on pages’ relative importance.
Of course, important pages mean nothing to you if they don’t match your query. So, Google combines PageRank with sophisticated text-matching techniques to find pages that are both important and relevant to your search. Google goes far beyond the number of times a term appears on a page and examines dozens of aspects of the page’s content (and the content of the pages linking to it) to determine if it’s a good match for your query.

So there you go: the more back links a website has, the better its chances of being highly ranked. The quality of the websites pointing to it also has a lot of influence. It's probably the best technique so far, since your search returns the site that seems most popular for the query you entered. This is where it gets tricky, because more popular doesn't necessarily mean best quality. Let's say you build a new website about technology with a lot of quality content, built to the highest standards of the industry. Now you write a new article about the new MacBook Air, hoping to get some traffic when people search for MacBook Air on Google. Even if your page is the best available for that keyword, it might be returned on page 15-20 of Google's search results. Why? Because you don't have any links pointing to your website, so you have a poor PageRank. It may also be almost impossible for you to reach the first results page, because you might be competing with websites that have millions of back links.
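The "votes weighted by the voter's own importance" idea can be sketched with a simple power-iteration PageRank. This is a toy version over a hypothetical four-page site, not Google's production algorithm:

```python
def pagerank(links, damping=0.85, iters=50):
    """Power-iteration PageRank over a dict {page: [pages it links to]}."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}           # start with equal rank
    for _ in range(iters):
        new = {p: (1 - damping) / n for p in pages}
        for p, outs in links.items():
            if outs:
                share = damping * rank[p] / len(outs)
                for q in outs:                   # p "votes" for each q
                    new[q] += share
            else:                                # dangling page: spread evenly
                for q in pages:
                    new[q] += damping * rank[p] / n
        rank = new
    return rank

# Hypothetical four-page web: three pages all link to "home".
graph = {"home": ["about"], "about": ["home"],
         "blog": ["home"], "shop": ["home"]}
ranks = pagerank(graph)
print(max(ranks, key=ranks.get))  # "home" collects the most votes
```

Because "home" receives links from three pages while "blog" and "shop" receive none, its rank dominates, which is the back-link effect the paragraph above describes.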

That’s where Google ranking gets tricky. Most users don’t go any further than the first 30 results, and sometimes you’d be surprised by the quality of the links you find on page 7-8. Those websites might be a little less popular, but that doesn’t mean they don’t offer quality content. It’s hard work today to get a good rank on Google, because you are up against people and companies that want to be first and have full-time employees whose job is to optimize rankings. There are many techniques for improving your rankings beyond back links, collectively called SEO (Search Engine Optimization). That is beyond the scope of this article, though.

So there it is: that’s how Google works and how it decides which site should rank first for specific keywords. The approach has a lot of advantages, but at the same time smaller websites with quality content struggle to get decent traffic. That’s the downside.
The World Wide Web Consortium (W3C) is the main international standards organization for the World Wide Web (abbreviated WWW or W3). Founded and headed by Tim Berners-Lee, the consortium is made up of member organizations that maintain full-time staff to work together on developing standards for the World Wide Web. As of 8 September 2009, the W3C had 356 members.
W3C also engages in education and outreach, develops software and serves as an open forum for discussion about the Web.

Standards
W3C/IETF Standards (over Internet protocol suite):
http://www.3ainfocom.com


The query processor has several parts, including the user interface (search box), the “engine” that evaluates queries and matches them to relevant documents, and the results formatter.

PageRank is Google’s system for ranking web pages. A page with a higher PageRank is deemed more important and is more likely to be listed above a page with a lower PageRank.

Google considers over a hundred factors in computing a PageRank and determining which documents are most relevant to a query, including the popularity of the page, the position and size of the search terms within the page, and the proximity of the search terms to one another on the page. A patent application discusses other factors that Google considers when ranking a page. Visit SEOmoz.org’s report for an interpretation of the concepts and the practical applications contained in Google’s patent application.

Google also applies machine-learning techniques to improve its performance automatically by learning relationships and associations within the stored data. For example, the spelling-correcting system uses such techniques to figure out likely alternative spellings. Google closely guards the formulas it uses to calculate relevance; they’re tweaked to improve quality and performance, and to outwit the latest devious techniques used by spammers.

Indexing the full text of the web allows Google to go beyond simply matching single search terms. Google gives more priority to pages where the search terms appear near each other and in the same order as the query. Google can also match multi-word phrases and sentences. Since Google indexes HTML code in addition to the text on the page, users can restrict searches by where the query words appear, e.g., in the title, in the URL, in the body, or in links to the page; these options are offered by Google’s Advanced Search form and its advanced search operators.
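This kind of proximity and phrase matching becomes possible once the index records word positions, not just which documents contain a term. A small, hypothetical sketch (the documents and index layout are illustrative, not Google's actual format):

```python
from collections import defaultdict

def build_index(docs):
    """Positional index: term -> {doc_id: [positions]}."""
    index = defaultdict(dict)
    for doc_id, text in docs.items():
        for pos, term in enumerate(text.lower().split()):
            index[term].setdefault(doc_id, []).append(pos)
    return index

def phrase_match(index, phrase):
    """Documents where the phrase's words appear adjacent and in order."""
    terms = phrase.lower().split()
    hits = set(index.get(terms[0], {}))
    for offset, term in enumerate(terms[1:], start=1):
        postings = index.get(term, {})
        # keep only docs where this term sits `offset` words after the first
        hits = {d for d in hits
                if any(p + offset in postings.get(d, [])
                       for p in index[terms[0]][d])}
    return sorted(hits)

docs = {1: "cheap running shoes on sale",
        2: "running a shop that sells shoes"}
idx = build_index(docs)
print(phrase_match(idx, "running shoes"))  # only doc 1 has the exact phrase
```

Both documents contain "running" and "shoes", but only document 1 has them adjacent and in query order, which is why it alone survives the phrase filter.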

Let’s see how Google processes a query.

Search engine optimization refers to the tasks carried out to maintain quality ranking and indexation of a website across the major search engines. It is a two-part process. First, you make sure the major search engines know which keywords you would like your pages to rank for; this is done with "on-page" search engine optimization. Second, the search engines need to know how popular your page is. To measure this, they look at all the other websites they have records for and add up how many of them link to you, giving each of these links a notional value. In short, Google, Yahoo and Bing run what is simply a massive popularity contest, and the business with the largest volume of targeted, high-quality links will rank first. This is called "off-page" search engine optimization.
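The "off-page" tally described above can be illustrated with a toy sketch. The site names and authority weights below are hypothetical stand-ins for the notional values a search engine might assign to each link:

```python
# Hypothetical authority weight of each page that casts a "vote".
authority = {"bignews.example": 0.9,
             "tinyblog.example": 0.2,
             "directory.example": 0.4}

# Who links to whom: site -> list of pages linking to it (back links).
backlinks = {"yoursite.example": ["bignews.example", "tinyblog.example"],
             "rival.example": ["tinyblog.example", "directory.example"]}

def off_page_score(site):
    """Sum the notional value of every link pointing at the site."""
    return sum(authority[src] for src in backlinks.get(site, []))

ranking = sorted(backlinks, key=off_page_score, reverse=True)
print(ranking)  # yoursite.example's one strong link outweighs two weak ones
```

The point of the sketch: two sites can have the same number of back links, yet the one with a single high-authority link wins the popularity contest.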

Did you know you can post your blog posts to Twitter automatically, without having to open a single page on Twitter? I just registered for this amazing facility at Twitterfeed, and believe me, I don't have to post a tweet for every single new topic that appears in my company blog.

  1. Go to Twitterfeed.com and register, or log in directly if you have an OpenID (your Blogger or WordPress blog URL). Put in your Twitter user credentials.
  2. Create a new Twitterfeed. For Blogger, choose the feed URL "http://YourBlogName.blogspot.com/atom.xml?redirect=false". This helps faster posting, as it is the raw feed. For WordPress and other blogging systems, likewise use the raw feed, not the FeedBurner feed.
  3. Set the other options and update the feed. Whenever you post a new entry to your blog, it will be posted automatically to Twitter.
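Under the hood, a service like Twitterfeed simply polls the raw feed and posts each new entry. Here is a minimal sketch of the feed-reading half, using Python's standard library with an inline Atom document standing in for a live blog feed (the entry titles and URLs are made up):

```python
import xml.etree.ElementTree as ET

ATOM = "{http://www.w3.org/2005/Atom}"  # Atom XML namespace

# A minimal Atom document standing in for the blog's raw feed.
feed_xml = """<feed xmlns="http://www.w3.org/2005/Atom">
  <entry><title>New MacBook Air review</title>
    <link href="http://yourblogname.blogspot.com/2009/01/air.html"/></entry>
  <entry><title>Choosing description tags</title>
    <link href="http://yourblogname.blogspot.com/2009/01/tags.html"/></entry>
</feed>"""

def entries(xml_text):
    """Yield (title, url) pairs a feed-to-Twitter service would tweet."""
    root = ET.fromstring(xml_text)
    for entry in root.findall(ATOM + "entry"):
        title = entry.findtext(ATOM + "title")
        link = entry.find(ATOM + "link").get("href")
        yield title, link

for title, url in entries(feed_xml):
    print(f"{title} {url}")
```

The posting half would remember which entries have already been tweeted and push only the new ones through Twitter's API.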