The World Wide Web Consortium (W3C) is the main international standards organization for the World Wide Web (abbreviated WWW or W3). Founded and headed by Tim Berners-Lee,[2] the consortium is made up of member organizations that maintain full-time staff to work together on the development of standards for the World Wide Web. As of 8 September 2009, the consortium has 356 members.[1]
W3C also engages in education and outreach, develops software and serves as an open forum for discussion about the Web.

Standards
W3C/IETF standards (built on the Internet protocol suite) include HTML, XHTML, CSS, XML, the DOM, SOAP, and SVG from W3C, together with underlying IETF protocols such as HTTP and URIs.


The query processor has several parts, including the user interface (search box), the “engine” that evaluates queries and matches them to relevant documents, and the results formatter.
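As a rough, hypothetical illustration of those three parts (not Google's actual implementation; the documents and names below are invented), a minimal engine can be sketched as an inverted index plus a results formatter:

    # Toy query processor: an inverted index (the "engine") plus a results
    # formatter. Purely illustrative; a real engine is vastly more complex.
    from collections import defaultdict

    documents = {
        1: "the quick brown fox",
        2: "the lazy brown dog",
        3: "quick thinking wins",
    }

    # Index each term to the set of documents containing it.
    index = defaultdict(set)
    for doc_id, text in documents.items():
        for term in text.split():
            index[term].add(doc_id)

    def evaluate(query):
        """Return ids of documents containing every query term."""
        terms = query.lower().split()
        if not terms:
            return set()
        matches = set(index.get(terms[0], set()))
        for term in terms[1:]:
            matches &= index.get(term, set())
        return matches

    def format_results(doc_ids):
        """The results formatter: render matches for display."""
        return "\n".join("[%d] %s" % (i, documents[i]) for i in sorted(doc_ids))

    print(format_results(evaluate("quick brown")))   # [1] the quick brown fox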

PageRank is Google’s system for ranking web pages. A page with a higher PageRank is deemed more important and is more likely to be listed above a page with a lower PageRank.

Google considers over a hundred factors in computing PageRank and determining which documents are most relevant to a query, including the popularity of the page, the position and size of the search terms within the page, and the proximity of the search terms to one another on the page. A patent application discusses other factors that Google considers when ranking a page. Visit SEOmoz.org’s report for an interpretation of the concepts and practical applications contained in Google’s patent application.

Google also applies machine-learning techniques to improve its performance automatically by learning relationships and associations within the stored data. For example, the spelling-correcting system uses such techniques to figure out likely alternative spellings. Google closely guards the formulas it uses to calculate relevance; they’re tweaked to improve quality and performance, and to outwit the latest devious techniques used by spammers.
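As a toy illustration of the idea (not Google's actual, unpublished system; the tiny corpus below is made up), one can suggest spellings by picking the most frequent known word within one edit of the input:

    # Toy data-driven spelling suggestion: pick the most common known word
    # within one edit (delete/transpose/replace/insert) of the input.
    from collections import Counter

    # Word frequencies "learned" from stored text (tiny invented corpus).
    word_counts = Counter(
        "the quick brown fox jumps over the lazy dog the fox".split()
    )

    def edits1(word):
        """All strings one edit away from `word`."""
        letters = "abcdefghijklmnopqrstuvwxyz"
        splits = [(word[:i], word[i:]) for i in range(len(word) + 1)]
        deletes = [a + b[1:] for a, b in splits if b]
        transposes = [a + b[1] + b[0] + b[2:] for a, b in splits if len(b) > 1]
        replaces = [a + c + b[1:] for a, b in splits if b for c in letters]
        inserts = [a + c + b for a, b in splits for c in letters]
        return set(deletes + transposes + replaces + inserts)

    def suggest(word):
        """Return the most frequent known word within one edit, if any."""
        if word in word_counts:
            return word
        candidates = edits1(word) & word_counts.keys()
        return max(candidates, key=word_counts.get) if candidates else word

    print(suggest("teh"))  # -> "the"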

Indexing the full text of the web allows Google to go beyond simply matching single search terms. Google gives more priority to pages that have search terms near each other and in the same order as the query. Google can also match multi-word phrases and sentences. Since Google indexes HTML code in addition to the text on the page, users can restrict searches on the basis of where query words appear, e.g., in the title, in the URL, in the body, and in links to the page, options offered by Google’s Advanced Search Form and Using Search Operators (Advanced Operators).
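For example, a few of Google's advanced operators restrict where the query terms must appear (the queries below are illustrative):

    intitle:routing            "routing" must appear in the page title
    inurl:feed                 "feed" must appear in the URL
    site:w3.org validator      results restricted to w3.org
    filetype:pdf pagerank      only PDF documents
    link:example.com           pages that link to example.com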

That, in brief, is how Google processes a query.

Search engine optimization (SEO) refers to the tasks carried out to maintain a website's ranking and indexing across the major search engines. It is a two-part process. First, you make sure the major search engines know which keywords you would like your pages to rank for; this is done through "on-page" SEO. Second, the search engines need to know how popular your page is. To measure this, they look at all the other websites they have records of, add up how many of them link to you, and give each of these links a notional value. In short, Google, Yahoo and Bing run what is essentially a massive popularity contest, and the business with the largest volume of targeted, high-quality links ranks first. This is called "off-page" SEO.

Did you know you can post your blog posts to Twitter automatically, without having to open a single page on Twitter? I just registered for this handy facility at Twitterfeed, and believe me, I don't have to post a tweet for every single new topic that appears in my company blog.




  1. Go to Twitterfeed.com and register, or log in directly if you have an OpenID (your Blogger or WordPress blog URL). Enter your Twitter user credentials.
  2. Create a new Twitterfeed. For Blogger, choose the feed URL "http://YourBlogName.blogspot.com/atom.xml?redirect=false"; this is the raw feed, which speeds up posting. For WordPress and other blogging systems, likewise use the raw feed, not the FeedBurner feed.
  3. Set the other options and update the feed. Whenever you post a new entry to your blog, it will be posted to Twitter automatically.

    RSS (Rich Site Summary) is a format for delivering regularly changing web content. Many news-related sites, weblogs and other online publishers syndicate their content as an RSS Feed to whoever wants it.
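    A minimal RSS 2.0 feed is just an XML file; the channel and item values below are made up for illustration:

        <?xml version="1.0" encoding="UTF-8"?>
        <rss version="2.0">
          <channel>
            <title>Example Blog</title>
            <link>http://www.example.com/</link>
            <description>Latest posts from an example blog.</description>
            <item>
              <title>First post</title>
              <link>http://www.example.com/first-post</link>
              <description>A short summary of the post.</description>
              <pubDate>Tue, 08 Sep 2009 12:00:00 GMT</pubDate>
            </item>
          </channel>
        </rss>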




    Why RSS? Benefits and Reasons for using RSS
    RSS solves a problem for people who regularly use the web. It allows you to stay informed easily by retrieving the latest content from the sites you are interested in. You save time by not needing to visit each site individually, and you protect your privacy by not needing to join each site's email newsletter. The number of sites offering RSS feeds is growing rapidly and includes big names like Yahoo News.
    What do I need to do to read an RSS Feed? RSS Feed Readers and News Aggregators
    Feed reader or news aggregator software allows you to grab the RSS feeds from various sites and display them for you to read and use.
    A variety of RSS readers are available for different platforms. Some popular desktop feed readers include AmphetaDesk (Windows, Linux, Mac), FeedReader (Windows), and NewsGator (Windows; integrates with Outlook). There are also a number of web-based feed readers: My Yahoo!, Bloglines, and Google Reader are popular choices.




    Once you have your Feed Reader, it is a matter of finding sites that syndicate content and adding their RSS feed to the list of feeds your Feed Reader checks. Many sites display a small icon with the acronyms RSS, XML, or RDF to let you know a feed is available.
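    In code, subscribing amounts to fetching and parsing that XML. A minimal sketch using the Python feedparser library (the feed URL is a placeholder):

        # Fetch and parse a feed, then print its newest entries.
        # Requires the third-party feedparser library (pip install feedparser).
        import feedparser

        # Placeholder URL; substitute any site's advertised RSS/XML feed.
        feed = feedparser.parse("http://www.example.com/rss.xml")

        print(feed.feed.title)          # the channel <title>
        for entry in feed.entries[:5]:  # the five newest items
            print(entry.title, "-", entry.link)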
    PageRank is a link analysis algorithm, named after Larry Page, used by the Google Internet search engine that assigns a numerical weighting to each element of a hyperlinked set of documents, such as the World Wide Web, with the purpose of "measuring" its relative importance within the set. The algorithm may be applied to any collection of entities with reciprocal quotations and references.

    Page Rank, also known as "Google juice", is the rank that Google assigns to individual web pages within your site. Page Rank (PR) is based on a logarithmic 0-10 scale that considers the content and transparency of your site, as judged by Google's search bots, as well as the number and quality of inbound and outbound links to the site.
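    The published form of the algorithm is easy to sketch. Below is a hypothetical four-page link graph with a plain power-iteration implementation; the damping factor 0.85 comes from the original PageRank paper, and the simplifying assumption is that every page has at least one outbound link:

        # Power-iteration PageRank over a tiny hypothetical link graph.
        # Each key is a page; its value is the list of pages it links to.
        # Simplification: every page here has at least one outbound link.
        links = {
            "A": ["B", "C"],
            "B": ["C"],
            "C": ["A"],
            "D": ["C"],
        }

        def pagerank(links, damping=0.85, iterations=50):
            n = len(links)
            ranks = {page: 1.0 / n for page in links}          # uniform start
            for _ in range(iterations):
                new_ranks = {page: (1 - damping) / n for page in links}
                for page, outlinks in links.items():
                    share = damping * ranks[page] / len(outlinks)
                    for target in outlinks:                    # pass rank along links
                        new_ranks[target] += share
                ranks = new_ranks
            return ranks

        for page, rank in sorted(pagerank(links).items(), key=lambda kv: -kv[1]):
            print(page, round(rank, 3))

    Page C ends up ranked highest because the most pages link to it, which is exactly the popularity-contest intuition described above.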

    Robots.txt is a text (not HTML) file you put on your site to tell search robots which pages you would like them not to visit. Robots.txt is by no means mandatory for search engines, but generally search engines obey what they are asked not to do. It is important to clarify that robots.txt is not a way of preventing search engines from crawling your site (i.e. it is not a firewall or a kind of password protection); putting up a robots.txt file is something like putting a note "Please do not enter" on an unlocked door: you cannot prevent thieves from coming in, but the good guys will not open the door and enter. That is why we say that if you have really sensitive data, it is too naïve to rely on robots.txt to protect it from being indexed and displayed in search results.
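    The file format itself is very simple; here is a small illustrative robots.txt (the paths are made up):

        # Rules for every robot ("*"); each Disallow names a path
        # the robot is asked to stay out of.
        User-agent: *
        Disallow: /cgi-bin/
        Disallow: /private/

        # A robot obeys the most specific User-agent group that matches it,
        # so Googlebot follows these rules instead of the block above.
        User-agent: Googlebot
        Disallow: /drafts/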

    The location of robots.txt is very important. It must be in the main (root) directory, because otherwise user agents (search engines) will not be able to find it: they do not search the whole site for a file named robots.txt. Instead, they look first in the main directory, and if they don't find it there, they simply assume that the site does not have a robots.txt file and therefore index everything they find along the way. So if you don't put robots.txt in the right place, do not be surprised when search engines index your whole site.
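    For example (example.com is a placeholder domain):

        http://www.example.com/robots.txt         <- found and obeyed
        http://www.example.com/pages/robots.txt   <- never looked at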