Wednesday, February 19, 2014

Performance Tuning: SQL Server Part 1

I always prefer to start with indexes. Many do not agree, but I believe this is the one step that needs no build, no deployment, and no code change, so why not try it?
There are two basic types of index, plus one hybrid variety:

           1)      Clustered index
           2)      Non-clustered index
               *      Covering indexes (maybe new to you, but interesting)

So first, look at each and every table and create a primary key. A primary key automatically creates a clustered index, and SELECT execution becomes faster.
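As a minimal sketch of this step (the EmpMaster table reappears later in this post; the EmpID column and data types here are assumptions for illustration):

```sql
-- The PRIMARY KEY constraint creates a clustered index
-- on EmpID by default in SQL Server.
CREATE TABLE dbo.EmpMaster
(
    EmpID   INT          NOT NULL PRIMARY KEY,  -- clustered index created here
    EmpName VARCHAR(100) NOT NULL,
    EmpDes  VARCHAR(50)  NULL,
    EmpLoc  VARCHAR(50)  NULL
);
```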

Create non-clustered indexes on columns which are:

  •         Used to join other tables
  •         Frequently used in the search criteria
  •         Used in the ORDER BY clause
  •         Used as foreign key fields
  •         Of high selectivity (a column that returns a low percentage (0-5%) of rows from the total number of rows for a particular value)
  •         Of type XML (primary and secondary indexes need to be created; more on this in the coming articles)
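For example, a non-clustered index on a column assumed to appear frequently in WHERE clauses and joins (EmpLoc here is just an illustrative choice on the EmpMaster table used in this post):

```sql
-- Non-clustered index on a frequently searched / joined column.
CREATE NONCLUSTERED INDEX NCLIX_EmpMaster_EmpLoc
ON dbo.EmpMaster (EmpLoc);
```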


Now, what if you want to search for an employee by name and also retrieve his designation and location? Fortunately, there is a way to implement this: the "covering index". You create a covering index to specify what additional column values the index page should store along with the clustered index key values (primary keys). Following is an example of creating a covering index:

CREATE INDEX NCLIX_EmpMaster_EmpName --Index name
ON dbo.EmpMaster(EmpName) --Column on which the index is created
INCLUDE(EmpDes, EmpLoc) --Additional column values to include
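A query like the following can then be answered from the index alone, with no key lookup into the base table, because EmpDes and EmpLoc are stored in the index as included columns (the name value is just a placeholder):

```sql
-- Fully covered by NCLIX_EmpMaster_EmpName: no key lookup needed.
SELECT EmpName, EmpDes, EmpLoc
FROM dbo.EmpMaster
WHERE EmpName = 'John Smith';
```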

Now you have created the index, but what if it improves performance in the test environment and not in production?
You might wonder: is that possible? Yes, it is, because the SQL engine generates a different query plan based on:
  •         Volume of data
  •         Index variance
  •         Parameter value passed to a SQL stored procedure
  •         Load
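The parameter-value point is often called parameter sniffing: the plan is compiled for the first parameter value the engine sees, which may suit test data but not production data. One hedged workaround (a sketch, reusing this post's EmpMaster table and a hypothetical @Location parameter) is to ask for a fresh plan per execution:

```sql
-- OPTION (RECOMPILE) recompiles the statement each time,
-- so the plan fits the actual parameter value.
SELECT EmpName, EmpDes
FROM dbo.EmpMaster
WHERE EmpLoc = @Location
OPTION (RECOMPILE);
```

Recompiling has a CPU cost, so use it selectively, not as a default.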

So is there any way to recreate the production environment?
Yes…
That is where the Database Engine Tuning Advisor helps, together with SQL Profiler.

1.       Use SQL Profiler to capture traces on the production server, using the Tuning template. (I know, it is advised not to run SQL Profiler against a production database, but sometimes you have to while diagnosing performance problems in production.) If you are not familiar with this tool, or if you need to learn more about profiling and tracing with SQL Profiler, read http://msdn.microsoft.com/en-us/library/ms181091.aspx.


2.       Use the trace file generated in the previous step to create a similar load on the test database server using the Database Engine Tuning Advisor, and ask it for advice (index creation advice in most cases). You are likely to get good, realistic advice, because the Tuning Advisor replays the trace captured from the production database against the test database and then tries to generate the best possible indexing suggestions. Using the Tuning Advisor, you can also create the indexes it suggests. If you are not familiar with the tool, or if you need to learn more about using it, read http://msdn.microsoft.com/en-us/library/ms166575.aspx.
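The Tuning Advisor can also be driven from the command line via the dta utility. A rough sketch of such an invocation (the server, database, trace file, and session names below are all hypothetical, and the exact switches should be checked against the dta documentation):

```
dta -S TestServer -E -D EmpDB -if ProdTrace.trc -s TuningSession1 -of Recommendations.sql
```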


Reference: an article in a tech journal, plus some R & D of my own.

You can write feedback to viralpala@gmail.com.

Sunday, February 16, 2014

Save Millions of Dollars by Using Windows Azure to Speed Up Genome Research


Hi Friends,

I just read an interesting article and would like to share some details with you.
In the next few days I will also share some AI (Artificial Intelligence) theory that I am preparing with a friend who is doing a PhD in mobile AI.

For now, the article below is about Azure.

A year ago, researchers at Virginia Tech needed two weeks to analyze just one genome. Today, they can analyze 100 genomes each day. Why is this important? Scientists can learn more about our DNA and uncover more effective strategies for detecting, diagnosing, and treating diseases such as cancer. What’s helping to make this possible? An innovative solution developed by Virginia Polytechnic Institute and State University (Virginia Tech) that’s based on Windows Azure and the Windows Azure HDInsight Service.

There are currently an estimated 2,000 DNA sequencers generating around 15 petabytes of data every year. Additionally, data volumes are doubling every 8 months, significantly ahead of Moore’s law, under which compute capability doubles only every 24 months. Most institutions can’t afford to scale data centers fast enough to store and analyze all of the new information. To overcome this challenge, Virginia Tech developed a high-performance computing (HPC) solution with Windows Azure. It gives global researchers a highly scalable, on-demand IT infrastructure in the cloud that they can use to store and analyze Big Data, accelerate genome research, and increase collaboration.

To make it easy for researchers to use the solution, Virginia Tech developed two cloud applications. One streamlines the creation of Genome Analysis Toolkit (GATK) pipelines (for DNA sequencing) using Windows Azure HDInsight. The other program simplifies the use of Hadoop MapReduce pipelines to automate data transfers and analyze information that resides on local and cloud-based systems in a hybrid scenario.

The new solution is saving Virginia Tech—and other organizations—millions of dollars because scientists pay only for the resources that they use. This includes Windows Azure Blob storage for temporary or long-term data storage and HDInsight clusters for on-demand HPC nodes. Provisioning a new resource takes just seconds.

Global scientists can also collaborate with less effort because they can now easily share insights and data sets virtually anytime, anywhere—and with any device. As a result, in the future scientists or doctors may be able to use the solution to develop custom treatments for individual patients faster, by engaging in genome analysis directly at hospitals.

Reference: SQL Server Weekly.

Monday, February 3, 2014

AngularJS (Part 1)

Are you curious about ng-JS (AngularJS)?

AngularJS is a framework that helps you build front ends (UI) for web-based applications. It has custom elements and attributes, and it uses dependency injection.

“AngularJS is a prescriptive client-side JavaScript framework used to
make single-page web apps.”



Core Features of ng-JS (AngularJS)

  •         Two-way data binding
  •         Model View Whatever (MVW)
  •         HTML templates
  •         Deep linking
  •         Dependency injection
  •         Directives

Two-Way Data Binding

Without a framework, you update the DOM by hand every time the model changes:

      document.getElementById('yourName').textContent = 'bob';

With AngularJS, the information lives in one place and the template simply refers to it:

    {{yourName}}                 <!-- in the DOM -->
    $scope.yourName = 'bob';     // in the controller

“$scope enables two-way data binding.”
It makes it easy to change a value and effortlessly have it update the DOM.

  •         No repetition of code
  •         No need to worry about when to update the DOM
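To see why this works, here is a minimal plain-JavaScript sketch of the idea behind Angular's dirty-checking $digest loop. This Scope class is a hypothetical, heavily simplified illustration, not the real AngularJS implementation:

```javascript
// Simplified sketch of Angular 1.x-style dirty checking.
function Scope() {
  this.$$watchers = [];
}

// Register a watch: watchFn reads a model value,
// listenerFn reacts when that value changes.
Scope.prototype.$watch = function (watchFn, listenerFn) {
  this.$$watchers.push({ watchFn: watchFn, listenerFn: listenerFn, last: undefined });
};

// Keep running all watchers until nothing changes (the model is "stable").
Scope.prototype.$digest = function () {
  var dirty;
  do {
    dirty = false;
    for (var i = 0; i < this.$$watchers.length; i++) {
      var w = this.$$watchers[i];
      var newValue = w.watchFn(this);
      if (newValue !== w.last) {
        w.listenerFn(newValue, w.last, this);
        w.last = newValue;
        dirty = true;
      }
    }
  } while (dirty);
};

// Usage: the "DOM" here is just a string that mirrors scope.yourName.
var scope = new Scope();
var dom = '';
scope.yourName = 'bob';
scope.$watch(
  function (s) { return s.yourName; },
  function (newValue) { dom = 'Hello, ' + newValue; }
);
scope.$digest();
console.log(dom); // Hello, bob
scope.yourName = 'alice';
scope.$digest();
console.log(dom); // Hello, alice
```

The point is that your code only ever assigns to the model; the framework notices the change on the next digest and updates the view for you.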


MVVM Design

AngularJS belongs to the MVC family of design patterns; more precisely, it follows MVVM (Model View ViewModel), and that is how AngularJS is constructed.

 
In Part 2 we will explore this in more detail.
If you have any comments, you can email me at viralpala@gmail.com.