Tuesday 21 October 2014

WEBSITE DESIGNING

How to Design a Website
Web design is the process of collecting ideas, and aesthetically arranging and implementing them, guided by certain principles for a specific purpose.
It encompasses many different skills and disciplines in the production and maintenance of websites. The different areas of web design include web graphic
design; interface design; authoring, including standard code and proprietary software; user experience design; and search engine optimization.

Often many individuals work in teams covering different aspects of the design process, although some designers cover them all. Web design is a process of creation with the intention of presenting content on electronic web pages, which end users can access through the Internet with the help of a web browser.



Tools And Technology 


The W3C has released new standards for HTML (HTML5) and CSS (CSS3), as well as new JavaScript APIs, each as a new but individual standard. However, while the term HTML5 is only used to refer to the new version of HTML and some of the JavaScript APIs, it has become common to use it to refer to the entire suite of new standards (HTML5, CSS3 and JavaScript).
Web designers use a variety of different tools depending on what part of the production process they are involved in. These tools are updated over time by newer standards and software, but the principles behind them remain the same. Web graphic designers use vector and raster graphics packages to create web-formatted imagery or design prototypes. Technologies used to create websites include standardised mark-up, which can be hand-coded or generated by WYSIWYG editing software. There is also proprietary software based on plug-ins that bypasses the client's browser versions; these are often WYSIWYG but with the option of using the software's scripting language. Search engine optimisation tools may be used to check search engine ranking and suggest improvements.




Elements of Web Design

Web design uses many of the same key visual elements as all other types of design, such as:

Page Layout  This is the way the graphics, ads and text are arranged. In the web world, a key goal is to help the viewer find the information they seek at a glance. This includes maintaining the balance, consistency, and integrity of the design. Part of the user interface design is affected by the quality of the page layout. For example, a designer may consider whether the site's page layout should remain consistent on different pages when designing the layout. Page pixel width may also be considered vital for aligning objects in the layout design. The most popular fixed-width websites generally have the same set width to match the current most popular browser window, at the current most popular screen resolution, on the current most popular monitor size. Most pages are also center-aligned for concerns of aesthetics on larger screens.

Fluid layouts increased in popularity around 2000 as an alternative to HTML-table-based layouts and grid-based design, in both page layout design principle and coding technique, but were very slow to be adopted. This was due to considerations of screen reading devices and varying window sizes, which designers have no control over. Accordingly, a design may be broken down into units (sidebars, content blocks, embedded advertising areas, navigation areas) that are sent to the browser and fitted into the display window by the browser as best it can. Because the browser does recognize the details of the reader's screen (window size, font size relative to window, etc.), it can make user-specific layout adjustments to fluid layouts, but not to fixed-width layouts. Although such a display may often change the relative position of major content units, for example sidebars may be displaced below body text rather than to the side of it, this is a more flexible display than a hard-coded grid-based layout that doesn't fit the device window. In particular, the relative position of content blocks may change while leaving the content within the block unaffected. This also minimizes the user's need to horizontally scroll the page.

Page Color  The choice of colors depends on the purpose and clientele; it could range from a simple black-and-white design to a multi-colored design conveying the personality of a person or the brand of an organization, using web-safe colors to make the site attractive for users.

Motion Graphics 

Graphics can include logos, photos, clipart or icons, all of which enhance the web design. For user friendliness, these need to be placed appropriately, working with the color and content of the web page, while not making it too congested or slow to load.


The page layout and user interface may also be affected by the use of motion graphics. The choice of whether or not to use motion graphics may depend on the target market for the website. Motion graphics may be expected, or at least better received, on an entertainment-oriented website. However, a website target audience with a more serious or formal interest (such as business, community, or government) might find animations unnecessary and distracting if they serve only entertainment or decoration purposes. This doesn't mean that more serious content couldn't be enhanced with animated or video presentations that are relevant to the content. In either case, motion graphic design may make the difference between effective visuals and distracting visuals.


Page Fonts  The use of various fonts can enhance a website design. Most web browsers can only reliably display a select number of fonts, known as "web-safe fonts", so your designer will generally work within this widely accepted group.

Homepage  The homepage is traditionally considered the most important page on a website. However, practitioners into the 2000s were starting to find that a growing share of website traffic was bypassing the homepage, going directly to internal content pages through search engines, e-newsletters and RSS feeds. This led many practitioners to argue that homepages are less important than most people think. Jared Spool argued in 2007 that a site's homepage was actually the least important page on a website.











Sunday 5 October 2014

What Is a Web Search Engine

A web search engine is a software system that is designed to search for information on the World Wide Web. The search results are generally presented in a line of results, often referred to as search engine results pages (SERPs). The information may be a mix of web pages, images, and other types of files. Archie, which was used to search for FTP files, is considered the first search engine ever developed, and Veronica is considered the first text-based search engine. Today, the most popular and well-known search engine is Google.

Because large search engines contain millions and sometimes billions of pages, many search engines not only search the pages but also rank the results according to their importance. This importance is commonly determined by using various algorithms.



On the Internet, a search engine is a coordinated set of programs that includes:


    • A spider (also called a "crawler" or a "bot") that goes to every page, or representative pages, on every website that wants to be searchable and reads it, using hypertext links on each page to discover and read a site's other pages
    • A program that creates a huge index (sometimes called a "catalog") from the pages that have been read
    • A program that receives your search request, compares it to the entries in the index, and returns results to you.
    Once a page has been crawled, the data contained within the page is processed. Often this involves stripping out stop words and recording the location of each word in the page, the frequency with which it occurs, links to other pages, images, and so on. This data is used to rank the page and is the primary method a search engine uses to determine whether a page should be shown and in what order.
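    As a rough illustration of this processing step, here is a small Python sketch (not any particular engine's implementation) that strips a tiny, assumed stop-word list from a page's text and records each remaining word's positions and frequency. The tokenizer, stop-word list and record layout are all simplifying assumptions.

import re
from collections import defaultdict

# A tiny, assumed stop-word list; real engines use much larger ones.
STOP_WORDS = {"the", "a", "an", "and", "or", "of", "to", "in", "is"}

def process_page(text):
    """Return {word: {"positions": [...], "frequency": n}} for a page's text."""
    words = re.findall(r"[a-z0-9]+", text.lower())
    record = defaultdict(lambda: {"positions": [], "frequency": 0})
    for position, word in enumerate(words):
        if word in STOP_WORDS:
            continue  # stop words are stripped out before indexing
        record[word]["positions"].append(position)
        record[word]["frequency"] += 1
    return dict(record)

if __name__ == "__main__":
    sample = "Web design is the process of collecting ideas and arranging them."
    for word, info in process_page(sample).items():
        print(word, info)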
    Some search engines also mine data available in databases or open directories. Unlike web directories, which are maintained only by human editors, search engines also maintain real-time information by running an algorithm on a web crawler.

    -----------------------------------------------------------------------------------------------------------------------------------------------

    How Web Search Engines Work

    A search engine operates in the following order:
    1. Web crawling
    2. Indexing
    3. Searching
    Web search engines work by storing information about many web pages, which they retrieve from the HTML markup of the pages. These pages are retrieved by a web crawler (sometimes also known as a spider), an automated program which follows every link on the site. The site owner can exclude specific pages by using robots.txt.
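    To make the crawling step concrete, the following Python sketch fetches a single page, checks robots.txt first using the standard library's robotparser, and collects the links it finds for the crawl frontier. The starting URL and the "ExampleBot" user agent are placeholders; a real crawler would also add politeness delays, deduplication and error handling.

import urllib.request
import urllib.robotparser
from html.parser import HTMLParser
from urllib.parse import urljoin

class LinkCollector(HTMLParser):
    """Collect href targets from anchor tags."""
    def __init__(self):
        super().__init__()
        self.links = []
    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def crawl_once(url, user_agent="ExampleBot"):  # placeholder user agent
    # Respect the site's robots.txt before fetching anything.
    robots = urllib.robotparser.RobotFileParser()
    robots.set_url(urljoin(url, "/robots.txt"))
    robots.read()
    if not robots.can_fetch(user_agent, url):
        return []
    request = urllib.request.Request(url, headers={"User-Agent": user_agent})
    with urllib.request.urlopen(request) as response:
        html = response.read().decode("utf-8", errors="replace")
    parser = LinkCollector()
    parser.feed(html)
    # Convert relative links to absolute URLs for the crawl frontier.
    return [urljoin(url, link) for link in parser.links]

if __name__ == "__main__":
    print(crawl_once("https://example.com/"))  # placeholder starting URL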
    The search engine then analyzes the contents of each page to determine how it should be indexed (for example, words can be extracted from the titles, page content, headings, or special fields called meta tags). Data about web pages are stored in an index database for use in later queries. A query from a user can be a single word. The index helps find information relating to the query as quickly as possible.
    Some search engines, such as Google, store all or part of the source page (referred to as a cache) as well as information about the web pages, whereas others, such as AltaVista, store every word of every page they find. This cached page always holds the actual search text, since it is the one that was actually indexed, so it can be very useful when the content of the current page has been updated and the search terms are no longer in it. This problem might be considered a mild form of linkrot, and Google's handling of it increases usability by satisfying user expectations that the search terms will be on the returned webpage. This satisfies the principle of least astonishment, since the user normally expects the search terms to be on the returned pages. Increased search relevance makes these cached pages very useful, as they may contain data that is no longer available elsewhere.

    When a user enters a query into a search engine (typically by using keywords), the engine examines its index and provides a listing of best-matching web pages according to its criteria, usually with a short summary containing the document's title and sometimes parts of the text. The index is built from the information stored with the data and the method by which the information is indexed. From 2007 the Google.com search engine has allowed one to search by date by clicking "Show search tools" in the leftmost column of the initial search results page, and then selecting the desired date range.
     Most search engines support the use of the Boolean operators AND, OR and NOT to further specify the search query. Boolean operators are for literal searches that allow the user to refine and extend the terms of the search; the engine looks for the words or phrases exactly as entered. Some search engines provide an advanced feature called proximity search, which allows users to define the distance between keywords. Natural language queries, as offered by sites such as ask.com, allow the user to type a question in the same form one would ask it of a human.
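    To make the Boolean operators concrete, here is a hedged Python sketch that evaluates AND, OR and NOT against a toy inverted index mapping words to sets of page IDs. The index contents are invented for illustration; real engines combine these set operations with ranking rather than returning bare sets.

# A toy inverted index: word -> set of page IDs containing that word.
INDEX = {
    "web":    {1, 2, 3},
    "design": {1, 3},
    "engine": {2, 4},
}
ALL_PAGES = {1, 2, 3, 4}

def search_and(*words):
    """Pages containing every word (AND)."""
    sets = [INDEX.get(w, set()) for w in words]
    return set.intersection(*sets) if sets else set()

def search_or(*words):
    """Pages containing any of the words (OR)."""
    return set().union(*(INDEX.get(w, set()) for w in words))

def search_not(word):
    """Pages that do not contain the word (NOT)."""
    return ALL_PAGES - INDEX.get(word, set())

print(search_and("web", "design"))    # {1, 3}
print(search_or("design", "engine"))  # {1, 2, 3, 4}
print(search_not("engine"))           # {1, 3}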
    The usefulness of a search engine depends on the relevance of the result set it gives back. While there may be millions of web pages that include a particular word or phrase, some pages may be more relevant, popular, or authoritative than others. Most search engines employ methods to rank the results to provide the "best" results first. How a search engine decides which pages are the best matches, and what order the results should be shown in, varies widely from one engine to another. The methods also change over time as Internet usage changes and new techniques evolve. Two main types of search engine have evolved: one is a system of predefined and hierarchically ordered keywords that humans have programmed extensively; the other is a system that generates an "inverted index" by analyzing texts it locates. The second form relies much more heavily on the computer itself to do the bulk of the work.
    Most Web search engines are commercial ventures supported by advertising revenue and thus some of them allow advertisers to have their listings ranked higher in search results for a fee. Search engines that do not accept money for their search results make money by running search related ads alongside the regular search engine results. The search engines make money every time someone clicks on one of these ads.


    Google and other major search engines like Bing and Yahoo use large numbers of computers in order to search through the large quantities of data across the web.
    Web search engines catalog the world wide web by using a spider, or web crawler. These web-crawling robots were created for indexing content; they scan and assess the content on site pages and information archives across the web.

    Algorithms And Determining The Best Search Engines


    Different internet search engines use different algorithms for determining which web pages are the most relevant for a particular search engine keyword, and which web pages should appear at the top of the search engine results page.
    Relevancy is the key for online search engines – users naturally prefer a search engine that will give them the best and most relevant results.
    Search engines are often quite guarded with their search algorithms, since their unique algorithm is trying to generate the most relevant results. The best search engines, and often the most popular search engines as a result, are the ones that are the most relevant.


    Search Engine History


    Search engine history started in 1990 with Archie, a tool that indexed the directory listings of public FTP sites. Search engines continued to be primitive directory listings until they developed into crawling and indexing websites, eventually creating algorithms to optimize relevancy.
    Yahoo started off as just a list of favorite websites, eventually growing large enough to become a searchable index directory. Yahoo actually outsourced its search services until 2002, when it started to seriously work on its own search engine.

    ------------------------------------------------------------------------------------------------------------------------------------------------------


    Using Search Engine 


    There is plenty of material available on the net. To search the web effectively, you need to select the search site and compose the search criteria carefully.

    • Visit a search engine. One of the most important search engines is Google. To visit this site, type https://www.google.com in the address bar and press the Enter key.
    • Conduct the search. To conduct a search on Google, type the text you wish to search for in the search text box. For example, type 'microinfoweb'.
    • Click the Google Search button to begin the search.
    • View the search results. The search engine will show you the best-matched pages. You can click the Next > hyperlink at the bottom of the page to see more results.
    • You can click on a hyperlink to see the web page. Some search engines are:

    1. www.google.com
    2. www.yahoo.com
    3. www.bing.com
    4. www.yandex.com              


    The web also provides a facility to download programs and files easily. Just right-click on the download link, choose the Save Target As option, and click the Save button after specifying the file name.




    Saturday 4 October 2014

    What Is Search Engine Optimization



    Search Engine Optimization (SEO) is the process of improving the visibility of a website or a web page in a search engine's "free", "organic", "editorial" or natural search results. In general, the earlier (or higher ranked on the search results page) and more frequently a site appears in the search results list, the more visitors it will receive from the search engine, compared to an unoptimized site.
    SEO may target different types of search, including:
    • VIDEO SEARCH
    • LOCAL SEARCH
    • IMAGE SEARCH
    • OTHERS
    As an Internet marketing strategy, SEO considers how search engines work, what people search for, and the actual keywords typed into search engines, so that the search engine shows the best-matched web pages or websites. The plural of the abbreviation SEO can also refer to "search engine optimizers", those who provide SEO services.

    --------------------------------------------------------------------------------------------------------------------------

    Optimizing a website may involve editing its content, HTML and associated coding to both increase its relevance to specific keywords and remove barriers to the indexing activities of search engines, which helps to increase the traffic a website or web page receives.


      ------------------------------------------------------------------------------------------------------------------------



    How Search Engines Find Websites  A search engine is an application that helps users find information or documents in the ocean of knowledge that is the Internet. Search engines want to do their job as well as possible by referring users to the websites and content most relevant to the search keywords.
    So there are four basic things to maintain:
    • Content  Provide proper information and write great, original content, then choose a short and accurate title and description.
    • Design  Select an attractive design to make your website impressive and beautiful for a great user experience.
    • Performance  Check your website regularly: how fast is it, and does it work properly? (A small timing sketch follows this list.)
    • User experience  The website should look great, be easy to navigate, and look safe; this leads to a higher traffic rate.
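    For the performance point above, here is a minimal Python sketch that times a single page request so slow responses stand out. It assumes the third-party requests library is installed; the URL and one-second threshold are placeholders, and a real performance check would also look at page weight, caching and rendering time.

import requests  # third-party library: pip install requests

def check_speed(url, threshold_seconds=1.0):
    """Fetch a page and report whether it responded within the threshold."""
    response = requests.get(url, timeout=10)
    seconds = response.elapsed.total_seconds()
    ok = response.status_code == 200 and seconds <= threshold_seconds
    print(f"{url}: {response.status_code} in {seconds:.2f}s "
          f"({'OK' if ok else 'needs attention'})")
    return ok

if __name__ == "__main__":
    check_speed("https://example.com/")  # placeholder URL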

    --------------------------------------------------------------------------------------------------------------------------


    Techniques of Search Engines  Different types of search engines and search engine companies are available on the Internet, but each uses its own webmaster tool application for optimizing websites.



    • Getting Indexed  The leading search engines such as Google, Bing and Yahoo use crawlers to find pages for their algorithmic search results. There are two major directories:
    1.    YAHOO DIRECTORY
    2.    DMOZ
    Both require manual submission and human editorial review. Google, however, offers Google Webmaster Tools, for which an XML Sitemap feed can be created and submitted for free; a sketch of generating such a sitemap appears below.
    Search engine crawlers may look at a number of different factors when crawling a site. Not every page is indexed by the search engines, and the distance of pages from the root directory of a site may also be a factor in whether or not pages get crawled.
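    As an illustration of the XML Sitemap feed mentioned above, this Python sketch builds a minimal sitemap with the standard library. The URLs and dates are invented placeholders, and the output follows the basic sitemaps.org urlset format rather than any tool-specific variant.

import xml.etree.ElementTree as ET

def build_sitemap(pages):
    """Return a minimal XML sitemap string for a list of (url, lastmod) pairs."""
    ns = "http://www.sitemaps.org/schemas/sitemap/0.9"
    urlset = ET.Element("urlset", xmlns=ns)
    for loc, lastmod in pages:
        url_el = ET.SubElement(urlset, "url")
        ET.SubElement(url_el, "loc").text = loc
        ET.SubElement(url_el, "lastmod").text = lastmod
    return ET.tostring(urlset, encoding="unicode")

if __name__ == "__main__":
    pages = [
        ("https://example.com/", "2014-10-04"),       # placeholder entries
        ("https://example.com/about", "2014-09-20"),
    ]
    print(build_sitemap(pages))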


    --------------------------------------------------------------------------------------------------------------------------


    History  Webmasters and content providers began optimizing sites for search engines in the mid-1990s, as the first search engines were cataloging the early Web. Initially, all webmasters needed to do was submit the address of a page, or URL, to the various engines, which would send a "spider" to "crawl" that page, extract links to other pages from it, and return information found on the page to be indexed. The process involves a search engine spider downloading a page and storing it on the search engine's own server, where a second program, known as an indexer, extracts various information about the page, such as the words it contains and where these are located, as well as any weight for specific words, and all links the page contains, which are then placed into a scheduler for crawling at a later date.
    Site owners started to recognize the value of having their sites highly ranked and visible in search engine results. According to industry analyst Danny Sullivan, the phrase "search engine optimization" probably came into use in 1997. On May 2, 2007, Jason Gambert attempted to trademark the term SEO by convincing the Trademark Office in Arizona that SEO is a "process" involving manipulation of keywords, and not a "marketing service".
    Early versions of search algorithms relied on webmaster-provided information such as the keyword meta tag, or index files in engines like ALIWEB. Meta tags provide a guide to each page's content. Using meta data to index pages was found to be less than reliable, however, because the webmaster's choice of keywords in the meta tag could potentially be an inaccurate representation of the site's actual content. Inaccurate, incomplete, and inconsistent data in meta tags could and did cause pages to rank for irrelevant searches. Web content providers also manipulated a number of attributes within the HTML source of a page in an attempt to rank well in search engines.
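    As a small illustration of the keyword meta tag discussed above, the Python sketch below pulls the content of a <meta name="keywords"> tag out of a page's HTML with the standard library parser. The sample HTML is invented; early engines such as ALIWEB consumed this kind of webmaster-provided data in their own formats.

from html.parser import HTMLParser

class MetaKeywordsParser(HTMLParser):
    """Collect the content attribute of <meta name="keywords"> tags."""
    def __init__(self):
        super().__init__()
        self.keywords = []
    def handle_starttag(self, tag, attrs):
        if tag != "meta":
            return
        attrs = dict(attrs)
        if (attrs.get("name") or "").lower() == "keywords" and attrs.get("content"):
            self.keywords.extend(k.strip() for k in attrs["content"].split(","))

html = """<html><head>
<meta name="keywords" content="web design, search engine, SEO">
</head><body>...</body></html>"""  # invented sample page

parser = MetaKeywordsParser()
parser.feed(html)
print(parser.keywords)  # ['web design', 'search engine', 'SEO']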
    By relying so much on factors such as keyword density which were exclusively within a webmaster's control, early search engines suffered from abuse and ranking manipulation. To provide better results to their users, search engines had to adapt to ensure their results pages showed the most relevant search results, rather than unrelated pages stuffed with numerous keywords by unscrupulous webmasters. Since the success and popularity of a search engine is determined by its ability to produce the most relevant results to any given search, poor quality or irrelevant search results could lead users to find other search sources. Graduate students at Stanford University, Larry Page and Sergey Brin, developed "Backrub," a search engine that relied on a mathematical algorithm to rate the prominence of web pages. The number calculated by the algorithm, PageRank, is a function of the quantity and strength of inbound links. PageRank estimates the likelihood that a given page will be reached by a web user who randomly surfs the web, and follows links from one page to another. In effect, this means that some links are stronger than others, as a higher PageRank page is more likely to be reached by the random surfer.
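    To show the idea behind PageRank's random-surfer model, here is a hedged Python sketch of the usual power-iteration computation on a tiny invented link graph. The damping factor of 0.85 is the commonly cited value; production implementations handle dangling pages and scale very differently.

def pagerank(links, damping=0.85, iterations=50):
    """Compute PageRank for a {page: [pages it links to]} graph by power iteration."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iterations):
        new_rank = {p: (1.0 - damping) / n for p in pages}
        for page, outlinks in links.items():
            if not outlinks:
                continue  # a full implementation would redistribute dangling rank
            share = damping * rank[page] / len(outlinks)
            for target in outlinks:
                new_rank[target] += share
        rank = new_rank
    return rank

# Invented four-page link graph for illustration.
graph = {
    "A": ["B", "C"],
    "B": ["C"],
    "C": ["A"],
    "D": ["C"],
}
for page, score in sorted(pagerank(graph).items(), key=lambda kv: -kv[1]):
    print(page, round(score, 3))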
    Page and Brin founded Google in 1998. Google attracted a loyal following among the growing number of Internet users, who liked its simple design. Off-page factors (such as PageRank and hyperlink analysis) were considered as well as on-page factors (such as keyword frequency, meta tags, headings, links and site structure) to enable Google to avoid the kind of manipulation seen in search engines that only considered on-page factors for their rankings. Although PageRank was more difficult to game, webmasters had already developed link-building tools and schemes to influence the Inktomi search engine, and these methods proved similarly applicable to gaming PageRank. Many sites focused on exchanging, buying, and selling links, often on a massive scale. Some of these schemes, or link farms, involved the creation of thousands of sites for the sole purpose of link spamming.
    By 2004, search engines had incorporated a wide range of undisclosed factors in their ranking algorithms to reduce the impact of link manipulation. In June 2007, The New York Times' Saul Hansell stated Google ranks sites using more than 200 different signals. The leading search engines, Google, Bing, and Yahoo, do not disclose the algorithms they use to rank pages. Some SEO practitioners have studied different approaches to search engine optimization, and have shared their personal opinions.
    In 2005, Google began personalizing search results for each user. Google crafted results for logged in users. In 2008, Bruce Clay said that "ranking is dead" because of personalized search. He opined that it would become meaningless to discuss how a website ranked, because its rank would potentially be different for each user and each search.
    In 2007, Google announced a campaign against paid links that transfer PageRank. On June 15, 2009, Google disclosed that they had taken measures to mitigate the effects of PageRank sculpting by use of the nofollow attribute on links. Matt Cutts, a well-known software engineer at Google, announced that Google Bot would no longer treat nofollowed links in the same way, in order to prevent SEO service providers from using nofollow for PageRank sculpting. As a result of this change, the usage of nofollow leads to evaporation of PageRank. In order to avoid the above, SEO engineers developed alternative techniques that replace nofollowed tags with obfuscated JavaScript and thus permit PageRank sculpting. Additionally, several solutions have been suggested that include the usage of iframes, Flash and JavaScript.
    In December 2009, Google announced it would be using the web search history of all its users in order to populate search results.
    On June 8, 2010, a new web indexing system called Google Caffeine was announced. Designed to allow users to find news results, forum posts and other content much sooner after publishing than before, Google Caffeine was a change to the way Google updated its index in order to make things show up more quickly on Google than before. According to Carrie Grimes, the software engineer who announced Caffeine for Google, "Caffeine provides 50 percent fresher results for web searches than our last index..."
    Google Instant, real-time-search, was introduced in late 2010 in an attempt to make search results more timely and relevant. Historically site administrators have spent months or even years optimizing a website to increase search rankings. With the growth in popularity of social media sites and blogs the leading engines made changes to their algorithms to allow fresh content to rank quickly within the search results.
    In February 2011, Google announced the Panda update, which penalizes websites containing content duplicated from other websites and sources. Historically websites have copied content from one another and benefited in search engine rankings by engaging in this practice, however Google implemented a new system which punishes sites whose content is not unique.
    In April 2012, Google launched the Google Penguin update the goal of which was to penalize websites that used manipulative techniques to improve their rankings on the search engine.
    In September 2013, Google released the Google Hummingbird update, an algorithm change designed to improve Google's natural language processing and semantic understanding of web pages. [NOTE: This history is written with reference to Wikipedia.]

    --------------------------------------------------------------------------------------------------------------------------


    White Hat Versus Black Hat Techniques  SEO techniques can be classified into two broad categories: techniques that search engines recommend as part of good design, and techniques of which search engines do not approve. The search engines attempt to minimize the effect of the latter, among them spamdexing. Industry commentators have classified these methods, and the practitioners who employ them, as either white hat SEO or black hat SEO. White hats tend to produce results that last a long time, whereas black hats anticipate that their sites may eventually be banned either temporarily or permanently once the search engines discover what they are doing.
    An SEO technique is considered white hat if it conforms to the search engines' guidelines and involves no deception. As the search engine guidelines are not written as a series of rules or commandments, this is an important distinction to note. White hat SEO is not just about following guidelines, but is about ensuring that the content a search engine indexes and subsequently ranks is the same content a user will see. White hat advice is generally summed up as creating content for users, not for search engines, and then making that content easily accessible to the spiders, rather than attempting to trick the algorithm from its intended purpose. White hat SEO is in many ways similar to web development that promotes accessibility, although the two are not identical.
    Black hat SEO attempts to improve rankings in ways that are disapproved of by the search engines, or involve deception. One black hat technique uses text that is hidden, either as text colored similar to the background, in an invisible div, or positioned off screen. Another method gives a different page depending on whether the page is being requested by a human visitor or a search engine, a technique known as cloaking. Another category sometimes used is grey hat SEO. This is in between the black hat and white hat approaches, where the methods employed avoid the site being penalised but do not aim to produce the best content for users, being entirely focused on improving search engine rankings.
    Search engines may penalize sites they discover using black hat methods, either by reducing their rankings or eliminating their listings from their databases altogether. Such penalties can be applied either automatically by the search engines' algorithms, or by a manual site review. One example was the February 2006 Google removal of both BMW Germany and Ricoh Germany for use of deceptive practices. Both companies, however, quickly apologized, fixed the offending pages, and were restored to Google's list.

    --------------------------------------------------------------------------------------------------------------------------


    On October 17, 2002, SearchKing filed suit in the United States District Court, Western District of Oklahoma, against the search engine Google. SearchKing's claim was that Google's tactics to prevent spamdexing constituted a tortious interference with contractual relations. On May 27, 2003, the court granted Google's motion to dismiss the complaint because SearchKing "failed to state a claim upon which relief may be granted."
    In March 2006, KinderStart filed a lawsuit against Google over search engine rankings. Kinderstart's website was removed from Google's index prior to the lawsuit and the amount of traffic to the site dropped by 70%. On March 16, 2007 the United States District Court for the Northern District of California (San Jose Division) dismissed KinderStart's complaint without leave to amend, and partially granted Google's motion for Rule 11 sanctions against KinderStart's attorney, requiring him to pay part of Google's legal expenses.
    List of Search Engines  There are many kinds of search engines available on the Internet; this list shows some of the major ones:
    • GOOGLE                                                                                                                                   
    • YAHOO                                                                                                                                   
    • BING