Search Engine Indexing and Crawling

When a user enters keywords into a search engine, it returns a list of results that best match those keywords. These results are drawn from a huge collection of pages stored in the search engine's database. Before any search can be answered, that database must first be built, and two main sub-processes build it:

1. Crawling: In the crawling (also called spidering) process, the search engine discovers the pages of the sites on the web by downloading pages from some starting sites, usually a seed list already stored in its database. As it crawls those initial sites, it examines the links they contain. Whenever it discovers a link that is not yet in its database, it appends that link to a list of pages to be crawled later. Pages reached through those new links may in turn contain further new links, which are appended to the list as well. In this way, the crawler continually updates its list of newly discovered sites as it works.

This process repeats continuously, without stopping, so that the crawler keeps up with the changing content of the web. Every newly discovered link leads to a crawl of that page, and possibly of the entire site, because when the spider crawls a page it also looks for links to other pages on the same site as well as links to external sites. It is therefore important for website owners to build such links to gain visibility with search engines: the more links point to a site, the more frequently it will be crawled and, once indexed, updated.
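The discovery loop described above can be sketched as a breadth-first crawl over a link graph. The sketch below uses a made-up in-memory "web" instead of real HTTP downloads, so the page names are purely illustrative:

```python
from collections import deque

# Toy in-memory "web": page URL -> list of outgoing links.
# A real crawler would download each page over HTTP and parse its HTML.
WEB = {
    "site-a/home": ["site-a/about", "site-b/home"],
    "site-a/about": ["site-a/home"],
    "site-b/home": ["site-b/blog", "site-a/home"],
    "site-b/blog": [],
}

def crawl(seeds):
    """Breadth-first crawl: follow links, queueing any URL not seen before."""
    frontier = deque(seeds)   # URLs waiting to be crawled
    discovered = set(seeds)   # URLs already known (the crawler's "database")
    crawled = []
    while frontier:
        url = frontier.popleft()
        crawled.append(url)
        for link in WEB.get(url, []):    # links found on the page
            if link not in discovered:   # new link: append for later crawling
                discovered.add(link)
                frontier.append(link)
    return crawled

print(crawl(["site-a/home"]))
# -> ['site-a/home', 'site-a/about', 'site-b/home', 'site-b/blog']
```

A production crawler would run this loop continuously, re-queueing already-known pages to pick up changed content.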

2. Indexing: Once the search engine has collected pages during crawling, it feeds them to its indexing algorithm. Broadly, the indexing algorithm compares and ranks related pages against each other so that when a user searches for a keyword, the engine can retrieve the pages with the highest rank. Each search engine has its own algorithm for indexing and ranking; once ranked, a page is stored in the database along with its rank.
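At its simplest, the data structure behind indexing is an inverted index mapping each keyword to the pages that contain it. The page ids and texts below are invented for illustration; real engines additionally score each match:

```python
# Minimal inverted index: keyword -> set of page ids containing it.
# Real engines also rank the matching pages (e.g. by term frequency
# and by backlinks), rather than returning them unordered.
pages = {
    "p1": "cheap flights to rome",
    "p2": "rome travel guide",
    "p3": "cheap hotels",
}

index = {}
for page_id, text in pages.items():
    for word in text.split():
        index.setdefault(word, set()).add(page_id)

def search(keyword):
    """Return the pages containing the keyword, in stable order."""
    return sorted(index.get(keyword, set()))

print(search("rome"))   # -> ['p1', 'p2']
print(search("cheap"))  # -> ['p1', 'p3']
```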

One might imagine that only the keywords on a page determine its rank, but in recent years a key additional factor has been the backlink. The idea behind backlinks is one of voting and reachability: a link pointing to a page implies that the linking site considers the page good, so it effectively votes for it, and the page also becomes reachable from that site. Search engines have become particularly concerned with reachability; the reasoning is that if one browses randomly through the sites on the web, …
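The random-surfer idea behind backlink voting is the basis of PageRank-style scoring: a page's score is the long-run probability that a randomly browsing user lands on it. A minimal sketch over a made-up three-page link graph:

```python
# Sketch of random-surfer (PageRank-style) ranking over a toy link graph.
# The link graph and parameter values are illustrative assumptions.
links = {
    "a": ["b", "c"],  # page "a" links to (votes for) "b" and "c"
    "b": ["c"],
    "c": ["a"],
}

def pagerank(links, damping=0.85, iterations=50):
    pages = list(links)
    rank = {p: 1.0 / len(pages) for p in pages}  # start uniform
    for _ in range(iterations):
        # With probability (1 - damping) the surfer jumps to a random page.
        new = {p: (1 - damping) / len(pages) for p in pages}
        for p, outs in links.items():
            share = damping * rank[p] / len(outs)
            for q in outs:        # each outgoing link passes a vote share
                new[q] += share
        rank = new
    return rank

ranks = pagerank(links)
print(max(ranks, key=ranks.get))  # "c" receives the most link votes here
```

Here "c" ends up highest because it is linked from both "a" and "b", illustrating how backlinks, not just on-page keywords, drive the ranking.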

Moving Microsoft Dynamics GP From One SQL Server to Another

If your company deploys Microsoft Great Plains as its corporate ERP, at some point you will face the routine IT task of migrating your accounting/MRP system from one physical computer to another. As an IT person you may or may not have much knowledge of the GP architecture, which is why we would like to orient you on this subject of transferring. We stress that this transfer routine should typically be performed by a professional consultant, so please do not hold us responsible for the consequences if you run into complications: customization, report modifications, and complex integrations, to name a few.

o Great Plains Architecture. Microsoft Dynamics GP is an MS SQL Server based application. The GP workstation connects to the SQL database via ODBC, which means that when the migration is done, you must update the ODBC connection settings to point to the new SQL Server.
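To make the ODBC point concrete, the only workstation-side setting that has to change after the move is the server the data source points to. The sketch below is a hypothetical illustration; the DSN name, driver name, and server names are made-up examples, not values GP mandates:

```python
# Hypothetical illustration of a GP workstation's ODBC data source settings.
# DSN name, driver, and server names are invented for this example.
def gp_odbc_settings(server):
    return {
        "DSN": "Dynamics GP",     # example data source name
        "Driver": "SQL Server",   # GP connects through the SQL Server ODBC driver
        "Server": server,         # <-- the one value to update after migration
    }

before = gp_odbc_settings("OLDSQLSRV")
after = gp_odbc_settings("NEWSQLSRV")
print(after["Server"])  # -> NEWSQLSRV
```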

o GP Security Model. This is the most important part in understanding the migration scenario. Great Plains initially targeted multiple database platforms: Pervasive SQL/Btrieve, Ctree/Faircom, and MS SQL Server, plus several networking platforms such as MS Windows, Novell, and Macintosh. Because of this, it does not use Windows domain logins or even plain SQL Server logins; instead it has its own security model, built into the DYNAMICS database. The GP security model, as realized on the MS SQL Server platform, utilizes MS SQL Server logins (stored in the master database) and grants them access to the DYNAMICS database and each company database. An additional complication is that the GP user password (as you enter it in the GP interface when you log in) is actually stored encrypted as the password of your login in MS SQL Server, which is why you cannot simply use your GP user login to register the GP SQL Server in MS SQL Server Enterprise Manager.

o Installing GP on the New Server. When you are done with the SQL Server installation (do not alter the sorting options), install Microsoft Dynamics GP with empty companies using the same company database IDs and, ideally, the same GP account structure segmenting. The latter is technically not required, but it will make your life easier if something goes wrong.

o Now the security transfer surgery. If you feel that you are very strong with DTS, you may use this tool. If you are not at that level of SQL database administration, please log in to GP CustomerSource and find the current TechKnowledge article on transferring user logins from one SQL Server to another. …