Feature Selection

What is Feature Selection?

Feature selection is a core step in machine learning. It is the process of selecting a subset of relevant features (variables or predictors) for use in model construction. Its techniques are used for a variety of reasons:

  • To simplify models so that they are easier for researchers and users to interpret
  • To shorten training times
  • To avoid the curse of dimensionality
  • To improve the compatibility of the data with a given class of learning models
  • To encode inherent symmetries present in the input space

"Feature Selection" is also called "Variable Selection", "Attribute Selection" or "Variable Subset Selection".

Data may be redundant or irrelevant, and Feature Selection can be used to discard data that is not needed. Feature Selection should also be distinguished from Feature Extraction: Feature Extraction creates new features from functions of the original features, whereas Feature Selection returns a subset of the original features. Feature Selection techniques are often used when there are relatively many features and comparatively few samples. Typical applications include the analysis of written text and of DNA microarray data, where there are many thousands of features and only a few hundred samples.
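The distinction between selection and extraction can be sketched in a few lines of scikit-learn. This is an illustrative example, not from the article; the dataset and the choice of k are assumptions.

```python
# Sketch: Feature Selection returns a subset of the ORIGINAL columns,
# while Feature Extraction (here PCA) builds NEW features that are
# combinations of all of them.
from sklearn.datasets import load_iris
from sklearn.decomposition import PCA
from sklearn.feature_selection import SelectKBest, f_classif

X, y = load_iris(return_X_y=True)            # 150 samples, 4 features

# Selection: keep the 2 original features with the highest F-scores.
selector = SelectKBest(score_func=f_classif, k=2)
X_sel = selector.fit_transform(X, y)

# Extraction: construct 2 new features as linear combinations of all 4.
X_ext = PCA(n_components=2).fit_transform(X)

print(X_sel.shape, X_ext.shape)              # both reduced to 2 columns
print(selector.get_support())                # mask over the original columns
```

Both results have the same shape, but only the selected columns keep their original meaning, which is what makes selected features easier to interpret.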

A Feature Selection algorithm can be seen as the combination of a search technique for proposing new feature subsets with an evaluation measure that scores the different subsets. The simplest algorithm would test every possible subset and keep the one that minimises the error rate, but such an exhaustive search is rarely feasible. The choice of evaluation metric strongly influences the algorithm, and it is this evaluation metric that distinguishes the three main classes of selection algorithms: Wrappers, Filters and Embedded Methods.
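The three classes can be illustrated with scikit-learn; the dataset, the classifier and the subset size below are assumptions chosen for the sketch, not prescribed by the article.

```python
# A filter scores features independently of any model, a wrapper searches
# subsets using a model's performance, and an embedded method selects
# while the model itself is trained (here via L1 regularisation).
from sklearn.datasets import load_breast_cancer
from sklearn.feature_selection import RFE, SelectFromModel, SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression

X, y = load_breast_cancer(return_X_y=True)   # 569 samples, 30 features

# Filter: univariate F-test, no learning model involved.
filt = SelectKBest(f_classif, k=10).fit(X, y)

# Wrapper: recursive feature elimination driven by a model's coefficients.
wrap = RFE(LogisticRegression(max_iter=5000), n_features_to_select=10).fit(X, y)

# Embedded: an L1-penalised model zeroes out coefficients as it trains.
emb = SelectFromModel(
    LogisticRegression(penalty="l1", solver="liblinear", C=0.1)
).fit(X, y)

print(filt.get_support().sum(), wrap.get_support().sum(), emb.get_support().sum())
```

Filters are cheapest, wrappers are the most expensive because they refit the model many times, and embedded methods sit in between.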

What problem does Feature Selection solve?

Feature selection methods can be used to create accurate predictive models: they help to select features that give the same or better accuracy while requiring less data. Appropriate feature selection methods can thus identify and remove unneeded, irrelevant and redundant attributes that do not contribute to the accuracy of a predictive model. This reduces the complexity of the model and makes it easier to understand.
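The claim that removing attributes need not hurt accuracy can be checked empirically. The sketch below is a rough illustration under assumed choices (dataset, classifier, k, and selecting on the full data before cross-validation, which a careful study would avoid).

```python
# Compare cross-validated accuracy with all 30 features vs. the 10
# highest-scoring ones: the reduced model is much smaller but should
# score in the same range.
from sklearn.datasets import load_breast_cancer
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)
X_small = SelectKBest(f_classif, k=10).fit_transform(X, y)

clf = DecisionTreeClassifier(random_state=0)
acc_full = cross_val_score(clf, X, y, cv=5).mean()
acc_small = cross_val_score(clf, X_small, y, cv=5).mean()
print(f"30 features: {acc_full:.3f}   10 features: {acc_small:.3f}")
```

With two thirds of the attributes removed, the tree is easier to inspect while its predictive accuracy stays comparable.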

The advantages

Memory can be saved and computation can be accelerated, since the model trains and predicts on fewer features.

What must be taken into account?

It is important to build a good understanding of which data are used and which features can safely be left out. Consider what information will be needed in the future, and remove irrelevant information that has no impact on the target. Simplifying the model in this way should make it easier to understand.
