
Saturday, April 28, 2018

What's there in wine? Part 2: PCA - validating with SPSS Modeler

Are you curious?
Will the dataset that was analysed in Python give the same result when tested with SPSS Modeler?
The wine-components data were fed into SPSS Modeler, and the same steps carried out in Python were repeated there (scaling and partitioning).

The same input conditions were used, keeping the customer segment as the target variable.
Feature scaling: (x - min(x)) / range(x)
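The min-max (range) formula above can be sketched in Python; the column values here are made up for illustration, not taken from the wine file:

```python
import numpy as np

# Min-max (range) scaling, as in the Modeler stream:
#   x_scaled = (x - min(x)) / (max(x) - min(x))
# Illustrative values only -- e.g. one wine component column.
x = np.array([12.8, 13.5, 14.2, 11.9, 13.0])

x_scaled = (x - x.min()) / (x.max() - x.min())
print(x_scaled.min(), x_scaled.max())  # every scaled column spans [0, 1]
```

After this transformation every variable lies in [0, 1], so no single component dominates the distance calculations.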



The number of factors was chosen based on the Python results, where the retained components explained about 55% of the variance.



Partition: 80-20 (training to testing)

Number of components: 2 (the 14 fields were shrunk to two factors)





A Filter node is connected to the model nugget to keep only the two factors and the customer segment for the logistic regression.
Finally, the Analysis node gives the results in the form of a confusion matrix.
Let us see it in detail.
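The Modeler stream (PCA nugget, Filter node, logistic regression, Analysis node) corresponds roughly to this sklearn sketch; the random data here is only a stand-in for the actual wine file:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import confusion_matrix

rng = np.random.default_rng(0)
X = rng.normal(size=(178, 13))    # stand-in for the wine component fields
y = rng.integers(0, 3, size=178)  # stand-in for the customer segment

# Keep only the two factors (the role of the Filter node)...
factors = PCA(n_components=2).fit_transform(X)

# ...and feed them, with the segment, to the logistic regression.
clf = LogisticRegression().fit(factors, y)

# The Analysis node's output is this confusion matrix.
print(confusion_matrix(y, clf.predict(factors)))
```

On random labels the matrix is of course meaningless; the point is only the shape of the pipeline.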




Here are the results.

The confusion matrix:


So what is to be noted?


  • The 14 variables were reduced to two factors.
  • The equations of the two factors are given above.
  • The logistic regression model gives results with an accuracy of 97.23%.
  • SPSS Modeler's prediction on the test set matches Python's exactly, with just one misclassification (see the previous post).
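The accuracy the Analysis node reports is simply the confusion-matrix diagonal over the total. Using the test-set counts from the Python run (14 + 15 + 6 correct, 1 misclassified; the position of the single error is assumed here):

```python
import numpy as np

# Test-set confusion matrix: 14 + 15 + 6 correct, 1 misclassified
# (which off-diagonal cell holds the error is an assumption).
cm = np.array([[14, 0, 0],
               [1, 15, 0],
               [0, 0, 6]])

accuracy = np.trace(cm) / cm.sum()     # correct predictions / all predictions
print(round(accuracy * 100, 2))        # 35/36 ≈ 97.22
```

The small difference from the 97.23% quoted above comes down to rounding and which partition the figure was computed on.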
Post your comments and views.

Thursday, April 26, 2018

What's there in wine? Principal component analysis problem - Data analytics


What's there in wine?
Which wine is suitable for a typical customer segment, and what are their preferences?
The objective is to understand the mathematics and the data-science part behind it; the model can be replicated for any similar business problem.
Here is a classical problem for understanding PCA (Principal Component Analysis). There are 178 records and 12 variables (the components used to prepare the wine), distributed across three categories of customers.

Problem statement: identify which variables contribute to the customers' preferences, find the variables with the maximum variance, and visualize what the machine has learned.
The task is to classify the category of customers and their taste. For each new wine, the model will predict which customer segment it should be recommended to.
The PCA step is an example of unsupervised learning, where the machine finds structure on its own without being given any target labels.
Let's dive deep.


Importance of PCA
            1. Chooses "m" components out of "n" variables, where m < n.
            2. The chosen m components explain most of the variance in the dataset.
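In sklearn terms, point 2 is what `explained_variance_ratio_` measures; a minimal sketch on random stand-in data (the real wine table would go in place of `X`):

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(42)
X = rng.normal(size=(178, 12))  # stand-in for the 178 x 12 wine table

pca = PCA(n_components=None).fit(X)   # keep every component at first...
ratios = pca.explained_variance_ratio_
print(ratios.sum())                   # ...the ratios always sum to 1
print(ratios[:2].sum())               # share explained by the top two
```

Components come out sorted by variance, so the first m entries of the ratio vector tell you how much of the dataset those m components explain.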
Now let us work out this problem in Python.
As a standard process:

  • Divide the dataset into a training set and a test set; the learning made on the training set is then applied to the test set to see the results.
  • Scale the data so the variables are on a uniform scale. There are a number of ways to do scaling; here I have preferred standard scaling, available in Python through the sklearn package.
  • Import PCA. Initially set the number of components to None and, after viewing the PCA results, decide the number of components to keep.
  • Here it was decided as the two components with the maximum variance.
  • Once we have the top two components, we use logistic regression to check the effectiveness and whether it has classified as planned.
  • Let us see the results: the confusion matrix.
  • We have got a wonderful result, as it has predicted 0 as 0 on 14 occasions, 1 as 1 on 15, and 2 as 2 on 6, with a misclassification on one occasion.
  • Use matplotlib to visualize the results.
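The steps above can be put together in one end-to-end sketch. The post's own CSV is not reproduced here, so sklearn's built-in wine dataset is used as a stand-in; the 80-20 split and the two components follow the post:

```python
from sklearn.datasets import load_wine
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import confusion_matrix, accuracy_score

X, y = load_wine(return_X_y=True)  # 178 wines, three customer segments

# 80-20 split, then standard scaling fitted on the training set only
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2,
                                          random_state=0)
scaler = StandardScaler().fit(X_tr)
X_tr, X_te = scaler.transform(X_tr), scaler.transform(X_te)

# Shrink the variables down to the two components with the most variance
pca = PCA(n_components=2).fit(X_tr)
X_tr, X_te = pca.transform(X_tr), pca.transform(X_te)

# Logistic regression on the two factors, scored on the test set
clf = LogisticRegression().fit(X_tr, y_tr)
y_pred = clf.predict(X_te)
print(confusion_matrix(y_te, y_pred))
print(accuracy_score(y_te, y_pred))
```

The exact cell counts depend on the random split, so they won't match the post's matrix digit for digit, but two components are enough to separate the three segments almost perfectly.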