Machine Learning (ML) MCQ Quiz Hub
Machine Learning (ML) MCQ Set 08
Test your knowledge and improve your Machine Learning (ML) skills
1. ______ showed better performance than other approaches, even without a context-based model
machine learning
deep learning
reinforcement learning
supervised learning
2. What is ‘Overfitting’ in machine learning?
when a statistical model describes random error or noise instead of the underlying relationship
robots are programmed so that they can perform the task based on data they gather from sensors
while involving the process of learning, ‘overfitting’ occurs
a set of data is used to discover the potentially predictive relationship
3. What is ‘Test set’?
test set is used to test the accuracy of the hypotheses generated by the learner.
it is a set of data used to discover the potentially predictive relationship.
both a & b
none of above
4. What is the function of ‘Supervised Learning’?
classifications, predict time series, annotate strings
speech recognition, regression
both a & b
none of above
5. During the last few years, many algorithms have been applied to deep neural networks to learn the best policy for playing Atari video games and to teach an agent how to associate the right action with an input representing the state.
logical
classical
classification
none of above
6. Common deep learning applications include
image classification, real-time visual tracking
autonomous car driving, logistic optimization
bioinformatics, speech recognition
All of the above
7. If there is only a discrete number of possible outcomes (called categories), the process becomes a ______.
regression
classification
model-free
categories
8. Let’s say you are working with categorical feature(s) and you have not looked at the distribution of the categorical variable in the test data. You want to apply one hot encoding (OHE) on the categorical feature(s). What challenges may you face if you apply OHE on a categorical variable of the train dataset?
all categories of the categorical variable are not present in the test dataset.
frequency distribution of categories is different in the train dataset as compared to the test dataset.
train and test always have the same distribution.
both a and b
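The pitfall this question points at is easy to reproduce. Below is a minimal sketch (the color data is invented for illustration) using scikit-learn's OneHotEncoder, whose handle_unknown="ignore" option is the usual guard against categories that appear only in the test set:

```python
# Minimal sketch: a category present only in the test data breaks a
# train-fitted one hot encoding unless unknowns are explicitly handled.
import numpy as np
from sklearn.preprocessing import OneHotEncoder

train = np.array([["red"], ["green"], ["blue"]])
test = np.array([["red"], ["purple"]])  # "purple" never appeared in train

enc = OneHotEncoder(handle_unknown="ignore")
enc.fit(train)

# With handle_unknown="ignore" the unseen category becomes an all-zeros
# row; with the default setting, transform(test) would raise an error.
print(enc.transform(test).toarray())
```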
9. Which of the following sentences is FALSE regarding regression?
it relates inputs to outputs.
it is used for prediction.
it may be used for interpretation.
it discovers causal relationships.
10. ______ can accept a NumPy RandomState generator or an integer seed.
make_blobs
random_state
test_size
training_size
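For context, the parameter in question is accepted by scikit-learn's dataset generators (make_blobs among them) as either an integer seed or a numpy RandomState instance. A small sketch:

```python
# Both an integer seed and a RandomState instance make the generated
# dataset reproducible.
import numpy as np
from sklearn.datasets import make_blobs

X1, y1 = make_blobs(n_samples=100, centers=3, random_state=42)
X2, y2 = make_blobs(n_samples=100, centers=3,
                    random_state=np.random.RandomState(42))

print(np.allclose(X1, X2))  # True: same seed, same blobs
```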
11. In many classification problems, the target dataset is made up of categorical labels which cannot immediately be processed by any algorithm. An encoding is needed, and scikit-learn offers at least ______ valid options
1
2
3
4
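The two options usually counted here are LabelEncoder and LabelBinarizer; a minimal sketch of both:

```python
# LabelEncoder maps categorical labels to integers; LabelBinarizer maps
# them to one-hot rows. Either makes labels digestible by an algorithm.
from sklearn.preprocessing import LabelEncoder, LabelBinarizer

labels = ["cat", "dog", "cat", "bird"]

le = LabelEncoder()
print(le.fit_transform(labels))  # e.g. [1 2 1 0]

lb = LabelBinarizer()
print(lb.fit_transform(labels))  # one 0/1 row per sample
```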
12. ______ is the most drastic one and should be considered only when the dataset is quite large, the number of missing features is high, and any prediction could be risky.
removing the whole line
creating a sub-model to predict those features
using an automatic strategy to impute them according to the other known values
All of the above
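Two of the three strategies are one-liners in practice; the sketch below (toy data, pandas for row dropping, scikit-learn's SimpleImputer for automatic imputation) contrasts them. Building a sub-model to predict the missing features is the remaining, more involved option.

```python
import numpy as np
import pandas as pd
from sklearn.impute import SimpleImputer

df = pd.DataFrame({"a": [1.0, np.nan, 3.0], "b": [4.0, 5.0, np.nan]})

# The drastic strategy: remove every line that has a missing feature.
print(df.dropna())

# The automatic strategy: impute according to the other known values
# (here, the per-column mean).
imp = SimpleImputer(strategy="mean")
print(imp.fit_transform(df))
```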
13. It's possible to specify if the scaling process must include both mean and standard deviation using the parameters ______.
with_mean=True/False
with_std=True/False
both a & b
none of the mentioned
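A minimal sketch of the two flags, which StandardScaler exposes independently (the toy matrix is invented):

```python
import numpy as np
from sklearn.preprocessing import StandardScaler

X = np.array([[1.0, 10.0], [2.0, 20.0], [3.0, 30.0]])

full = StandardScaler(with_mean=True, with_std=True)    # center and scale
center_only = StandardScaler(with_mean=True, with_std=False)

print(full.fit_transform(X))         # zero mean, unit variance
print(center_only.fit_transform(X))  # zero mean, variance untouched
```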
14. Which of the following selects the best K high-score features?
SelectPercentile
FeatureHasher
SelectKBest
all of the above
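A short sketch of SelectKBest, which retains the K features scoring highest under a chosen score function (f_classif on an invented dataset here):

```python
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, f_classif

X, y = make_classification(n_samples=200, n_features=10,
                           n_informative=3, random_state=0)

selector = SelectKBest(score_func=f_classif, k=3)
X_new = selector.fit_transform(X, y)
print(X_new.shape)  # (200, 3): only the 3 best-scoring features survive
```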
15. Suppose you have fitted a complex regression model on a dataset. Now, you are using Ridge regression with tuning parameter lambda to reduce its complexity. Choose the option(s) below which describe the relationship of bias and variance with lambda.
in case of very large lambda; bias is low, variance is low
in case of very large lambda; bias is low, variance is high
in case of very large lambda; bias is high, variance is low
in case of very large lambda; bias is high, variance is high
16. What is/are true about ridge regression? 1. When lambda is 0, the model works like a linear regression model. 2. When lambda is 0, the model doesn’t work like a linear regression model. 3. When lambda goes to infinity, we get very, very small coefficients approaching 0. 4. When lambda goes to infinity, we get very, very large coefficients approaching infinity.
1 and 3
1 and 4
2 and 3
2 and 4
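The lambda behavior in the two questions above can be checked numerically. In scikit-learn the tuning parameter is called alpha; the synthetic data below is invented for illustration:

```python
# As alpha (the quiz's lambda) grows, Ridge coefficients shrink toward 0:
# high bias, low variance. As alpha approaches 0, Ridge behaves like
# plain linear regression.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.RandomState(0)
X = rng.randn(100, 5)
y = X @ np.array([3.0, -2.0, 1.0, 0.5, 4.0]) + rng.randn(100)

for alpha in [0.01, 1.0, 1000.0]:
    model = Ridge(alpha=alpha).fit(X, y)
    print(alpha, np.round(model.coef_, 3))
```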
17. Which of the following method(s) does not have a closed-form solution for its coefficients?
ridge regression
lasso
both ridge and lasso
neither of the two
18. The function used for linear regression in R is
lm(formula, data)
lr(formula, data)
lrm(formula, data)
regression.linear (formula, data)
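For readers working in Python rather than R, statsmodels mirrors R's formula-plus-data fitting interface; a sketch on invented toy data:

```python
# Roughly equivalent in spirit to fitting y ~ x on a data frame in R.
import pandas as pd
import statsmodels.formula.api as smf

data = pd.DataFrame({"x": [1, 2, 3, 4, 5],
                     "y": [2.1, 3.9, 6.2, 8.1, 9.8]})

model = smf.ols("y ~ x", data=data).fit()
print(model.params)  # intercept and slope, as the coefficients would be in R
```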
19. In the mathematical equation of linear regression Y = β1 + β2X + ε, (β1, β2) refers to
(x-intercept, slope)
(slope, x-intercept)
(y-intercept, slope)
(slope, y-intercept)
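The (y-intercept, slope) reading can be verified by fitting data generated with known coefficients; a quick sketch:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

X = np.arange(10, dtype=float).reshape(-1, 1)
y = 5.0 + 2.0 * X.ravel()  # beta1 (y-intercept) = 5, beta2 (slope) = 2

model = LinearRegression().fit(X, y)
print(model.intercept_)  # ~5.0 -> beta1
print(model.coef_[0])    # ~2.0 -> beta2
```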
20. Suppose that we have N independent variables (X1, X2, …, Xn) and the dependent variable is Y. Now imagine that you are applying linear regression by fitting the best-fit line using least square error on this data. You found that the correlation coefficient for one of its variables (say X1) with Y is -0.95. Which of the following is true for X1?
the relation between X1 and Y is weak
the relation between X1 and Y is strong
the relation between X1 and Y is neutral
correlation can’t judge the relationship
21. We have been given a dataset with n records in which we have input attribute x and output attribute y. Suppose we use a linear regression method to model this data. To test our linear regressor, we split the data into training and test sets randomly. Now we increase the training set size gradually. As the training set size increases, what do you expect will happen with the mean training error?
increase
decrease
remain constant
can’t say
22. We have been given a dataset with n records in which we have input attribute x and output attribute y. Suppose we use a linear regression method to model this data. To test our linear regressor, we split the data into training and test sets randomly. What do you expect will happen with bias and variance as you increase the size of the training data?
bias increases and variance increases
bias decreases and variance increases
bias decreases and variance decreases
bias increases and variance decreases
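The trends asked about in the last two questions can be observed with scikit-learn's learning_curve on a synthetic regression problem (the dataset below is invented):

```python
# As training size grows, mean training error rises toward a plateau
# while test error falls toward it: variance shrinks, bias settles.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import learning_curve

X, y = make_regression(n_samples=500, n_features=5, noise=10.0,
                       random_state=0)

sizes, train_scores, test_scores = learning_curve(
    LinearRegression(), X, y, cv=5,
    train_sizes=np.linspace(0.1, 1.0, 5),
    scoring="neg_mean_squared_error")

print(-train_scores.mean(axis=1))  # mean training error (increasing)
print(-test_scores.mean(axis=1))   # mean test error (decreasing)
```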
23. The Multinomial Naïve Bayes classifier uses a ______ distribution
continuous
discrete
binary
None of these
24. For the given weather data, calculate the probability of not playing
0.4
0.64
0.36
0.5
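The weather table itself is not reproduced on this page; the answer options, however, match the classic 14-day “play tennis” dataset (9 days of play, 5 of no play). Assuming that dataset, the computation is a plain frequency count:

```python
# Prior probability of "not playing" under the assumed 14-day dataset.
yes_count, no_count = 9, 5
p_not_playing = no_count / (yes_count + no_count)
print(round(p_not_playing, 2))  # 0.36
```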
25. The minimum time complexity for training an SVM is O(n²). According to this fact, what sizes of datasets are not best suited for SVMs?
large datasets
small datasets
medium sized datasets
size does not matter
26. We usually use feature normalization before using the Gaussian kernel in SVM. What is true about feature normalization? 1. We do feature normalization so that the new features will dominate the others. 2. Sometimes, feature normalization is not feasible in the case of categorical variables. 3. Feature normalization always helps when we use a Gaussian kernel in SVM.
1
1 and 2
1 and 3
2 and 3
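The practice referenced by this question is commonly written as a pipeline, so that scaling is learned from the training data only. A sketch on invented data:

```python
# Standardize features before an RBF-kernel SVM so no single feature
# dominates the Gaussian distance computation.
from sklearn.datasets import make_classification
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = make_classification(n_samples=200, n_features=4, random_state=0)

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
clf.fit(X, y)
print(clf.score(X, y))
```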
27. Which of the following is not supervised learning?
PCA
decision tree
naive Bayesian
linear regression
28. The Gaussian Naïve Bayes classifier uses a ______ distribution
continuous
discrete
binary
All of the above
29. If I am using all features of my dataset and I achieve 100% accuracy on my training set, but ~70% on the validation set, what should I look out for?
underfitting
nothing, the model is perfect
overfitting
None of These
30. For the given weather data, what is the probability that players will play if the weather is sunny?
0.5
0.26
0.73
0.6
31. 100 people are at a party. The given data shows how many guests wear pink and whether each guest is a man. Imagine a pink-wearing guest leaves; what is the probability that this guest was a man?
0.4
0.2
0.6
0.45
32. What do you mean by generalization error in terms of the SVM?
how far the hyperplane is from the support vectors
how accurately the SVM can predict outcomes for unseen data
the threshold amount of error in an SVM
None of the above
33. SVMs are less effective when:
the data is linearly separable
the data is clean and ready to use
the data is noisy and contains overlapping points
None of these
34. Suppose you are using an RBF kernel in SVM with a high Gamma value. What does this signify?
the model would consider even far-away points from the hyperplane for modeling
the model would consider only the points close to the hyperplane for modeling
the model would not be affected by the distance of points from the hyperplane
none of the above
35. The cost parameter in the SVM means:
the number of cross-validations to be made
the kernel to be used
the tradeoff between misclassification and simplicity of the model
none of the above
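The two knobs covered by the previous two questions appear side by side in scikit-learn's SVC; the values below are deliberately extreme, on invented data, to illustrate the effect:

```python
# gamma controls how far one point's influence reaches (high gamma:
# only points near the boundary matter, risking overfitting).
# C is the cost parameter: the tradeoff between misclassification
# and simplicity of the model.
from sklearn.datasets import make_classification
from sklearn.svm import SVC

X, y = make_classification(n_samples=200, n_features=4, random_state=0)

clf = SVC(kernel="rbf", gamma=10.0, C=100.0).fit(X, y)
print(clf.score(X, y))  # training accuracy is typically near-perfect here
```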
36. Which of the following are real-world applications of the SVM?
text and hypertext categorization
image classification
clustering of news articles
all of the above
37. In reinforcement learning, the feedback provided by the environment is usually called the ______.
overfitting
overlearning
reward
none of above
38. In the last decade, many researchers started training bigger and bigger models, built with several different layers; that's why this approach is called ______.
deep learning
machine learning
reinforcement learning
unsupervised learning
39. It is necessary to allow the model to develop a generalization ability and avoid a common problem called ______.
overfitting
overlearning
classification
regression
40. Techniques that involve the usage of both labeled and unlabeled data are called ______.
supervised
semi-supervised
unsupervised
none of the above
41. Reinforcement learning is particularly efficient when ______.
the environment is not completely deterministic
it is often very dynamic
it is impossible to have a precise error measure
All of the above
42. scikit-learn also provides functions for creating dummy datasets from scratch:
make_classification()
make_regression()
make_blobs()
All of the above
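All three generators named above live in sklearn.datasets and return a ready-made (X, y) pair; a one-screen sketch:

```python
from sklearn.datasets import make_blobs, make_classification, make_regression

X_c, y_c = make_classification(n_samples=100, n_features=5, random_state=0)
X_r, y_r = make_regression(n_samples=100, n_features=5, random_state=0)
X_b, y_b = make_blobs(n_samples=100, centers=3, random_state=0)

print(X_c.shape, X_r.shape, X_b.shape)
```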