AI with Python – Data Preparation

We have already studied supervised as well as unsupervised machine learning algorithms. These algorithms require formatted data to start the training process. We must prepare or format data in a certain way so that it can be supplied as an input to ML algorithms.
This chapter focuses on data preparation for machine learning algorithms.

Preprocessing the Data

In our daily life, we deal with lots of data, but that data is in raw form. To provide the data as input to machine learning algorithms, we need to convert it into meaningful data. That is where data preprocessing comes into the picture. In simple words, before providing the data to the machine learning algorithms, we need to preprocess the data.

Data preprocessing steps

Follow these steps to preprocess the data in Python −
Step 1 − Importing the useful packages − When working in Python, this is the first step in converting the data into a certain format, i.e., preprocessing. It can be done as follows −

import numpy as np
from sklearn import preprocessing

Here we have used the following two packages −

  • NumPy − Basically, NumPy is a general-purpose array-processing package designed to efficiently manipulate large multi-dimensional arrays of arbitrary records without sacrificing too much speed for small multi-dimensional arrays.
  • sklearn.preprocessing − This package provides many common utility functions and transformer classes to change raw feature vectors into a representation that is more suitable for machine learning algorithms.

Step 2 − Defining sample data − After importing the packages, we need to define some sample data so that we can apply preprocessing techniques to it. We will now define the following sample data −

data = np.array([[3.2, -3.0, 8.9],
                 [-8.6, 7.6, 3.7],
                 [6.2, -5.6, 5.6],
                 [1.3, 4.1, -3.3]])

Step 3 − Applying a preprocessing technique − In this step, we apply any one of the preprocessing techniques.
The following section describes the data preprocessing techniques.

Techniques for Data Preprocessing

The techniques for data preprocessing are described below −

Binarization

This preprocessing technique is used when we need to convert numerical values into Boolean values. We can use an inbuilt method to binarize the input data, say by using 0.5 as the threshold value, in the following way −

binarized_data = preprocessing.Binarizer(threshold = 0.5).transform(data)
print("\nBinarized data:\n", binarized_data)

After running the above code, all values above 0.5 (the threshold value) are converted to 1, and all values below 0.5 are converted to 0 −

Binarized data:
[[1. 0. 1.]
[0. 1. 1.]
[1. 0. 1.]
[1. 1. 0.]]
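As a side note, sklearn also exposes a one-shot functional form, `preprocessing.binarize`, which applies the same thresholding without creating a transformer object. A minimal sketch, reusing the sample data defined in Step 2 −

```python
import numpy as np
from sklearn import preprocessing

# Same sample data as defined in Step 2
data = np.array([[3.2, -3.0, 8.9],
                 [-8.6, 7.6, 3.7],
                 [6.2, -5.6, 5.6],
                 [1.3, 4.1, -3.3]])

# binarize() is the functional counterpart of the Binarizer class:
# values above the threshold become 1.0, the rest become 0.0
binarized = preprocessing.binarize(data, threshold=0.5)
print(binarized)
```

The `Binarizer` class is preferable inside a `Pipeline`, where the same transformer must be reapplied to new data; the function is convenient for quick one-off use.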

Mean Removal

It is another very common preprocessing technique used in machine learning. Basically, it subtracts the mean from the feature vector so that every feature is centered on zero, which removes the bias from the features. To apply mean removal to the sample data, we can write the Python code shown below. The code first displays the mean and standard deviation of the input data −

print("Mean = ", data.mean(axis = 0))
print("Std deviation = ", data.std(axis = 0))

We will get the following output after running the above lines of code −

Mean = [ 0.525       0.775      3.725 ]
Std deviation = [ 5.55039413  5.303949  4.4622724 ]

Now, the code below will remove the mean from the input data and scale it to unit standard deviation −

data_scaled = preprocessing.scale(data)
print("Mean =", data_scaled.mean(axis=0))
print("Std deviation =", data_scaled.std(axis=0))

We will get the following output after running the above lines of code −

Mean = [ -1.38777878e-17  -2.77555756e-17  -5.55111512e-17 ]
Std deviation = [ 1.  1.  1. ]
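Equivalently, the `StandardScaler` transformer class performs the same mean removal; unlike the one-shot `scale()` function, it remembers the fitted mean and standard deviation so they can be reapplied to new data later. A minimal sketch, reusing the sample data from Step 2 −

```python
import numpy as np
from sklearn import preprocessing

data = np.array([[3.2, -3.0, 8.9],
                 [-8.6, 7.6, 3.7],
                 [6.2, -5.6, 5.6],
                 [1.3, 4.1, -3.3]])

# fit_transform() learns the per-feature mean and std, then standardizes
scaler = preprocessing.StandardScaler()
data_standardized = scaler.fit_transform(data)

print("Mean =", data_standardized.mean(axis=0))          # ~0 for every feature
print("Std deviation =", data_standardized.std(axis=0))  # 1 for every feature
```

The learned statistics are stored on the scaler (`scaler.mean_`, `scaler.scale_`), so `scaler.transform(new_data)` standardizes new samples using the training statistics rather than their own.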

Scaling

It is another data preprocessing technique, used to scale the feature vectors. Scaling is needed because the values of each feature can vary over a very different range, and we do not want any feature to be artificially large or small. With the help of the following Python code, we can scale our input data, i.e., the feature vector −

Min max scaling

data_scaler_minmax = preprocessing.MinMaxScaler(feature_range=(0,1))
data_scaled_minmax = data_scaler_minmax.fit_transform(data)
print("\nMin max scaled data:\n", data_scaled_minmax)

We will get the following output after running the above lines of code −

Min max scaled data:
[[0.7972973  0.1969697  1.        ]
[0.         1.         0.57377049]
[1.         0.         0.7295082 ]
[0.66891892 0.73484848 0.        ]]
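The `feature_range` argument need not be (0, 1); any target interval works. A small sketch, where the range (0, 5) is just an arbitrary illustration −

```python
import numpy as np
from sklearn import preprocessing

data = np.array([[3.2, -3.0, 8.9],
                 [-8.6, 7.6, 3.7],
                 [6.2, -5.6, 5.6],
                 [1.3, 4.1, -3.3]])

# Rescale each feature so its minimum maps to 0 and its maximum to 5
scaler = preprocessing.MinMaxScaler(feature_range=(0, 5))
scaled = scaler.fit_transform(data)

print("Column minima:", scaled.min(axis=0))  # all 0
print("Column maxima:", scaled.max(axis=0))  # all 5
```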

Normalization of data

It is another data preprocessing technique, used to modify the feature vectors so that they can be measured on a common scale. The following two types of normalization can be used in machine learning −

L1 Normalization
It is also referred to as Least Absolute Deviations. This kind of normalization modifies the values so that the sum of the absolute values in each row is always 1. It can be implemented on the input data with the help of the following Python code −

# L1 normalization of data
data_normalized_l1 = preprocessing.normalize(data, norm = 'l1')
print("\nL1 normalized data:\n", data_normalized_l1)

The above lines of code generate the following output −

L1 normalized data:
[[ 0.21192053 -0.1986755   0.58940397]
[-0.4321608   0.38190955  0.18592965]
[ 0.35632184 -0.32183908  0.32183908]
[ 0.14942529  0.47126437 -0.37931034]]

L2 Normalization
It is also referred to as Least Squares. This kind of normalization modifies the values so that the sum of the squares in each row is always 1. It can be implemented on the input data with the help of the following Python code −

# L2 normalization of data
data_normalized_l2 = preprocessing.normalize(data, norm = 'l2')
print("\nL2 normalized data:\n", data_normalized_l2)

The above lines of code will generate the following output −

L2 normalized data:
[[ 0.32250921 -0.30235238  0.89697873]
[-0.71318354  0.63025523  0.30683478]
[ 0.61643499 -0.55677999  0.55677999]
[ 0.2397969   0.75628252 -0.6087152 ]]
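The defining property of the two normalizations — each row's L1 norm (sum of absolute values) or L2 norm (Euclidean length) equals 1 — can be verified directly with NumPy. A quick sketch using the same sample data −

```python
import numpy as np
from sklearn import preprocessing

data = np.array([[3.2, -3.0, 8.9],
                 [-8.6, 7.6, 3.7],
                 [6.2, -5.6, 5.6],
                 [1.3, 4.1, -3.3]])

data_normalized_l1 = preprocessing.normalize(data, norm='l1')
data_normalized_l2 = preprocessing.normalize(data, norm='l2')

# Every row's absolute sum (L1) and Euclidean norm (L2) should be 1
print(np.abs(data_normalized_l1).sum(axis=1))
print(np.linalg.norm(data_normalized_l2, axis=1))
```

Note that `normalize` works row-wise (per sample), whereas scaling and mean removal work column-wise (per feature).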

Labeling the Data

We already know that data in a certain format is necessary for machine learning algorithms. Another important requirement is that the data must be labelled properly before being sent as input to a machine learning algorithm. For example, in classification the data carries many labels, and those labels come in the form of words, numbers, etc. Machine learning functions in sklearn expect the data to have numeric labels. Hence, if the data is in another form, it must be converted to numbers. This process of transforming word labels into numerical form is called label encoding.

Label encoding steps

Follow these steps for encoding the data labels in Python −
Step 1 − Importing the useful packages
When working in Python, this is the first step in converting the data into a certain format, i.e., preprocessing. It can be done as follows −

import numpy as np
from sklearn import preprocessing

Step 2 − Defining sample labels
After importing the packages, we need to define some sample labels so that we can create and train the label encoder. We will now define the following sample labels −

# Sample input labels
i_labels = ['orange','white','purple','maroon','blue','florocent','black']

Step 3 − Creating and training the label encoder object
In this step, we need to create the label encoder and train it. The following Python code will help in doing this −

# Creating the label encoder
enc = preprocessing.LabelEncoder()
enc.fit(i_labels)

Following would be the output after running the above Python code −

LabelEncoder()

Step 4 − Checking the performance by encoding a randomly ordered list
This step can be used to check the performance by encoding a randomly ordered list of labels. The following Python code can be written to do the same −

# encoding a set of labels
t_labels = ['purple','white','orange']
enc_values = enc.transform(t_labels)
print("\nLabels =", t_labels)

The labels would get printed as follows −

Labels = ['purple', 'white', 'orange']

Now, we can get the list of encoded values, i.e. the word labels converted to numbers, as follows −

print("Encoded values =", list(enc_values))

The encoded values would get printed as follows −

Encoded values = [5, 6, 4]

Step 5 − Checking the performance by decoding a random set of numbers
This step can be used to check the performance by decoding a random set of numbers back into labels. The following Python code can be written to do the same −

# decoding a set of values
enc_values = [5, 0, 4, 4]
dec_list = enc.inverse_transform(enc_values)
print("\nEncoded values =", enc_values)
print("\nDecoded labels =", list(dec_list))

Now, the encoded values and the decoded labels would get printed as follows −

Encoded values = [5, 0, 4, 4]
Decoded labels = ['purple', 'black', 'orange', 'orange']
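The mapping that the encoder builds can also be inspected directly: `LabelEncoder` stores the training labels, sorted alphabetically, in its `classes_` attribute, and the encoded value of a label is simply its index in that array. A short sketch with the sample labels from Step 2 −

```python
from sklearn import preprocessing

i_labels = ['orange','white','purple','maroon','blue','florocent','black']
enc = preprocessing.LabelEncoder()
enc.fit(i_labels)

# classes_ holds the sorted labels; a label's position is its encoded value
print(list(enc.classes_))
```

This explains the outputs above: in the sorted list, 'orange' sits at index 4, 'purple' at 5, and 'white' at 6.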

Labeled vs. Unlabeled Data

Unlabeled data mainly consists of samples of natural or human-created objects that can easily be obtained from the world. It includes audio, video, photos, news articles, etc.
On the other hand, labeled data takes a set of unlabeled data and augments each piece of it with a tag, label, or class that is meaningful. For example, for a photo, the label can be assigned based on the content of the photo, i.e., whether it is a photo of a boy, a girl, an animal, or anything else. Labeling data requires human expertise or judgment about a given piece of unlabeled data.
There are many scenarios where unlabeled data is plentiful and easily obtained, but labeled data often requires a human expert to annotate it. Semi-supervised learning attempts to combine labeled and unlabeled data to build better models.
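As an illustration of the semi-supervised idea, here is a minimal sketch using sklearn's `LabelPropagation`, which spreads the few known labels to nearby unlabeled points (conventionally marked with -1). The toy data below is invented purely for the example −

```python
import numpy as np
from sklearn.semi_supervised import LabelPropagation

# Two well-separated clusters; only one point in each carries a label
X = np.array([[1.0, 1.0], [1.1, 0.9], [0.9, 1.2],
              [5.0, 5.0], [5.2, 4.8], [4.9, 5.1]])
y = np.array([0, -1, -1, 1, -1, -1])   # -1 marks an unlabeled sample

model = LabelPropagation()
model.fit(X, y)

# transduction_ holds the labels inferred for every sample,
# including those that were unlabeled in y
print(model.transduction_)
```

Because the two clusters are far apart, each unlabeled point inherits the label of the single labeled point in its own cluster.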
