Choose a topic to test your knowledge and improve your Machine Learning (ML) skills
____ showed better performance than other approaches, even without a context-based model
What is ‘Overfitting’ in Machine Learning?
What is ‘Test set’?
What is the function of ‘Supervised Learning’?
During the last few years, many algorithms have been applied to deep neural networks to learn the best policy for playing Atari video games and to teach an agent how to associate the right action with an input representing the state.
Common deep learning applications include
If there is only a discrete number of possible outcomes (called categories), the process becomes a ____.
Let’s say you are working with categorical feature(s) and you have not looked at the distribution of the categorical variable in the test data. You want to apply one hot encoding (OHE) on the categorical feature(s). What challenges may you face if you have applied OHE on a categorical variable of the train dataset?
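As a concrete illustration of the unseen-category challenge this question points at, here is a minimal sketch using scikit-learn's OneHotEncoder (the city names are made up for the example):

```python
import pandas as pd
from sklearn.preprocessing import OneHotEncoder

# Hypothetical data: the test set contains a category ("Pune")
# that never appears in the training set.
train = pd.DataFrame({"city": ["Delhi", "Mumbai", "Delhi"]})
test = pd.DataFrame({"city": ["Mumbai", "Pune"]})

# Fitting OHE only on the train set means unseen test categories either
# raise an error (default) or become all-zero rows (handle_unknown="ignore").
enc = OneHotEncoder(handle_unknown="ignore")
enc.fit(train[["city"]])

encoded = enc.transform(test[["city"]]).toarray()
print(encoded)  # second row is all zeros: "Pune" was never seen in training
```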
Which of the following sentences is FALSE regarding regression?
____, which can accept a NumPy RandomState generator or an integer seed.
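A minimal sketch of this behavior, using train_test_split's random_state parameter as the example (any scikit-learn function with a random_state parameter behaves the same way):

```python
import numpy as np
from sklearn.model_selection import train_test_split

X = np.arange(20).reshape(10, 2)
y = np.arange(10)

# random_state accepts a plain integer seed...
X_tr1, X_te1, _, _ = train_test_split(X, y, test_size=0.3, random_state=42)

# ...or a NumPy RandomState generator
X_tr2, X_te2, _, _ = train_test_split(
    X, y, test_size=0.3, random_state=np.random.RandomState(42)
)

same = bool((X_tr1 == X_tr2).all())
print(same)  # True: the same seed reproduces the same split
```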
In many classification problems, the target dataset is made up of categorical labels which cannot immediately be processed by any algorithm. An encoding is needed, and scikit-learn offers at least ____ valid options.
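For reference, two of the label-encoding options scikit-learn provides are LabelEncoder (integer codes) and LabelBinarizer (one-hot rows); a small sketch with made-up labels:

```python
from sklearn.preprocessing import LabelBinarizer, LabelEncoder

labels = ["spam", "ham", "spam", "eggs"]  # made-up categorical targets

# LabelEncoder maps each class to an integer (classes sorted alphabetically)
le = LabelEncoder()
int_labels = le.fit_transform(labels)

# LabelBinarizer produces one one-hot column per class instead
lb = LabelBinarizer()
onehot = lb.fit_transform(labels)

print(int_labels)     # [2 1 2 0] for classes ['eggs', 'ham', 'spam']
print(onehot.shape)   # (4, 3)
```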
____ is the most drastic one and should be considered only when the dataset is quite large, the number of missing features is high, and any prediction could be risky.
It's possible to specify whether the scaling process must include both mean and standard deviation using the parameters ____.
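A minimal sketch of the two toggles in question, assuming scikit-learn's StandardScaler with its with_mean and with_std parameters:

```python
import numpy as np
from sklearn.preprocessing import StandardScaler

X = np.array([[1.0, 10.0], [2.0, 20.0], [3.0, 30.0]])

# Default: center by the mean AND divide by the standard deviation
full = StandardScaler().fit_transform(X)

# Each part of the process can be switched off independently
centered_only = StandardScaler(with_std=False).fit_transform(X)
scaled_only = StandardScaler(with_mean=False).fit_transform(X)

print(full.mean(axis=0), full.std(axis=0))  # ~[0. 0.] and [1. 1.]
```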
Which of the following selects the best K high-score features?
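The answer the question is after is scikit-learn's SelectKBest; a minimal sketch on the built-in iris dataset:

```python
from sklearn.datasets import load_iris
from sklearn.feature_selection import SelectKBest, f_classif

X, y = load_iris(return_X_y=True)
print(X.shape)  # (150, 4)

# Keep only the k features with the highest ANOVA F-scores
selector = SelectKBest(score_func=f_classif, k=2)
X_new = selector.fit_transform(X, y)
print(X_new.shape)  # (150, 2)
```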
Suppose you have fitted a complex regression model on a dataset. Now, you are using Ridge regression with tuning parameter lambda to reduce its complexity. Choose the option(s) below which describes the relationship of bias and variance with lambda.
What is/are true about ridge regression?
1. When lambda is 0, the model works like a linear regression model
2. When lambda is 0, the model doesn’t work like a linear regression model
3. When lambda goes to infinity, we get very, very small coefficients approaching 0
4. When lambda goes to infinity, we get very, very large coefficients approaching infinity
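The two limiting cases above can be checked numerically; a small sketch on synthetic data, using scikit-learn's Ridge (where lambda is called alpha):

```python
import numpy as np
from sklearn.linear_model import LinearRegression, Ridge

rng = np.random.RandomState(0)
X = rng.randn(50, 3)
y = X @ np.array([1.5, -2.0, 0.5]) + 0.1 * rng.randn(50)

# lambda (alpha) = 0: ridge coincides with ordinary linear regression
ridge_zero = Ridge(alpha=0.0).fit(X, y)
ols = LinearRegression().fit(X, y)

# lambda -> infinity: the coefficients are shrunk toward 0
ridge_huge = Ridge(alpha=1e8).fit(X, y)

print(np.allclose(ridge_zero.coef_, ols.coef_, atol=1e-6))  # True
print(np.max(np.abs(ridge_huge.coef_)) < 1e-3)              # True
```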
Which of the following method(s) does not have a closed-form solution for its coefficients?
The function used for linear regression in R is ____.
In the mathematical Equation of Linear Regression Y = β1 + β2X + ϵ, (β1, β2) refers to
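The coefficients in Y = β1 + β2X + ϵ (β1 the intercept, β2 the slope) can be recovered by least squares; a small NumPy sketch on synthetic data with known true values:

```python
import numpy as np

# Synthetic data with a known intercept (b1 = 2.0) and slope (b2 = 3.0)
rng = np.random.RandomState(1)
X = rng.rand(100)
Y = 2.0 + 3.0 * X + 0.05 * rng.randn(100)

# Design matrix [1, X]: the column of ones carries the intercept b1
A = np.column_stack([np.ones_like(X), X])
(b1, b2), *_ = np.linalg.lstsq(A, Y, rcond=None)

print(round(b1, 1), round(b2, 1))  # close to 2.0 and 3.0
```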
Suppose that we have N independent variables (X1, X2, …, Xn) and the dependent variable is Y. Now imagine that you are applying linear regression by fitting the best-fit line using least square error on this data. You found that the correlation coefficient for one of its variables (say X1) with Y is -0.95. Which of the following is true for X1?
We have been given a dataset with n records in which we have input attribute as x and output attribute as y. Suppose we use a linear regression method to model this data. To test our linear regressor, we split the data in training set and test set randomly. Now we increase the training set size gradually. As the training set size increases, what do you expect will happen with the mean training error?
We have been given a dataset with n records in which we have input attribute as x and output attribute as y. Suppose we use a linear regression method to model this data. To test our linear regressor, we split the data in training set and test set randomly. What do you expect will happen with bias and variance as you increase the size of training data?
The Multinomial Naïve Bayes Classifier assumes a ____ distribution.
For the given weather data, calculate the probability of not playing.
The minimum time complexity for training an SVM is O(n²). According to this fact, what sizes of datasets are not best suited for SVMs?
We usually use feature normalization before using the Gaussian kernel in SVM. What is true about feature normalization?
1. We do feature normalization so that the new features will dominate the other features
2. Sometimes, feature normalization is not feasible in case of categorical variables
3. Feature normalization always helps when we use a Gaussian kernel in SVM
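The effect of normalization with a Gaussian (RBF) kernel can be seen directly; a small sketch on scikit-learn's built-in wine dataset, whose features sit on very different scales:

```python
from sklearn.datasets import load_wine
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = load_wine(return_X_y=True)  # features on very different scales

# RBF kernel distances are dominated by the largest-scale features,
# so standardizing typically improves accuracy noticeably here.
raw = cross_val_score(SVC(kernel="rbf"), X, y, cv=5).mean()
scaled = cross_val_score(
    make_pipeline(StandardScaler(), SVC(kernel="rbf")), X, y, cv=5
).mean()

print(f"raw: {raw:.2f}  scaled: {scaled:.2f}")
```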
Which of the following is not supervised learning?
The Gaussian Naïve Bayes Classifier assumes a ____ distribution.
If I am using all features of my dataset and I achieve 100% accuracy on my training set, but only ~70% on the validation set, what should I look out for?
For the given weather data, what is the probability that players will play if the weather is sunny?
100 people are at a party. The given data shows how many wear pink or not, and whether each guest is a man or not. Imagine a pink-wearing guest leaves; what is the probability of that guest being a man?
What do you mean by generalization error in terms of the SVM?
SVMs are less effective when:
Suppose you are using an RBF kernel in SVM with a high Gamma value. What does this signify?
The cost parameter in the SVM means:
Which of the following are real world applications of the SVM?
In reinforcement learning, this feedback is usually called ____.
In the last decade, many researchers started training bigger and bigger models, built with several different layers; that's why this approach is called ____.
When it is necessary to allow the model to develop a generalization ability and avoid a common problem called ____.
Techniques that involve the usage of both labeled and unlabeled data are called ____.
Reinforcement learning is particularly efficient when ____.
scikit-learn also provides functions for creating dummy datasets from scratch:
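Two of those dataset-creation helpers are make_classification and make_regression; a minimal sketch:

```python
from sklearn.datasets import make_classification, make_regression

# A dummy binary classification set: 100 samples, 4 features
X_c, y_c = make_classification(
    n_samples=100, n_features=4, n_classes=2, random_state=0
)

# A dummy regression set with Gaussian noise on the targets
X_r, y_r = make_regression(n_samples=100, n_features=3, noise=0.5, random_state=0)

print(X_c.shape, sorted(set(y_c)))  # (100, 4) [0, 1]
print(X_r.shape)                    # (100, 3)
```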