Neural Networks in TensorFlow QUIZ (MCQ QUESTIONS AND ANSWERS)

Time: 20:00

Question: 1

Which of the following is NOT a benefit of using batch normalization?
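
For reference, a minimal TensorFlow/Keras sketch (layer sizes and input shape are illustrative assumptions) showing where a BatchNormalization layer commonly sits between a Dense layer and its activation:

```python
import tensorflow as tf

# Minimal sketch: batch normalization placed between the linear transform
# and the activation, a common (though not mandatory) ordering.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(784,)),                # e.g. flattened 28x28 images (assumed)
    tf.keras.layers.Dense(128, use_bias=False),  # bias is redundant before BN
    tf.keras.layers.BatchNormalization(),        # normalizes activations per mini-batch
    tf.keras.layers.Activation("relu"),
    tf.keras.layers.Dense(10, activation="softmax"),
])

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```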

Question: 2

What is the primary purpose of using activation functions in neural networks?

Question: 3

What is the primary purpose of using Leaky ReLU activation function?
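
As a reference point, a hedged Keras sketch contrasting standard ReLU with Leaky ReLU, which keeps a small slope for negative inputs; the layer sizes here are arbitrary assumptions:

```python
import tensorflow as tf

# Leaky ReLU keeps a small non-zero gradient for negative inputs,
# which helps avoid units that stop updating ("dying ReLU").
model = tf.keras.Sequential([
    tf.keras.Input(shape=(32,)),
    tf.keras.layers.Dense(64, activation="relu"),  # standard ReLU: max(0, x)
    tf.keras.layers.Dense(64),
    tf.keras.layers.LeakyReLU(),                   # default small negative slope
    tf.keras.layers.Dense(1),
])
```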

Question: 4

Which of the following activation functions is NOT symmetric around the origin?

Question: 5

Which of the following initialization strategies involves sampling weights from a uniform distribution?
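
For illustration, a minimal sketch of selecting weight initializers per layer in Keras: Glorot (Xavier) uniform samples from a uniform range scaled by fan-in and fan-out, while He normal samples from a zero-mean Gaussian scaled by fan-in:

```python
import tensorflow as tf

# Two common initialization strategies, set per layer via kernel_initializer.
layer_glorot = tf.keras.layers.Dense(
    64,
    activation="tanh",
    kernel_initializer="glorot_uniform",  # Xavier: uniform, scaled by fan_in + fan_out
)
layer_he = tf.keras.layers.Dense(
    64,
    activation="relu",
    kernel_initializer="he_normal",       # He: Gaussian, scaled by fan_in (suits ReLU)
)
```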

Question: 6

What is the primary purpose of using batch normalization along with dropout layers in a neural network?

Question: 7

Which of the following activation functions is prone to the "dying ReLU" problem?

Question: 8

Which initialization strategy involves setting all weights to zero?

Question: 9

What is the primary purpose of using dropout layers in neural networks?
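
A minimal Keras dropout sketch for reference; the rate of 0.5 is an assumed example value and denotes the fraction of units dropped during training:

```python
import tensorflow as tf

# Dropout randomly zeroes a fraction of activations during training only;
# at inference the layer passes values through (with automatic rescaling).
model = tf.keras.Sequential([
    tf.keras.Input(shape=(20,)),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dropout(0.5),   # rate = fraction of units dropped each step
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
```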

Question: 10

Which activation function is commonly used for multi-class classification tasks in the output layer of a neural network?
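
For reference, a hedged sketch of a multi-class output head; the 10-class size and dummy batch are assumptions for illustration:

```python
import tensorflow as tf

# Softmax turns the final layer's logits into a probability distribution
# over the classes, so it is the usual choice for multi-class outputs.
num_classes = 10  # assumed number of classes
output_layer = tf.keras.layers.Dense(num_classes, activation="softmax")

features = tf.random.normal([4, 32])   # dummy batch of 4 feature vectors
probs = output_layer(features)
print(tf.reduce_sum(probs, axis=-1))   # each row sums to ~1.0
```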

Question: 11

What is the primary purpose of using Xavier (Glorot) initialization?

Question: 12

Which of the following is a technique used to prevent the saturation of neurons and speed up the training of deep neural networks?

Question: 13

What is the main drawback of using the sigmoid activation function?

Question: 14

Which of the following initialization strategies is designed to address the vanishing/exploding gradient problem?

Question: 15

Which initialization strategy involves sampling weights from a Gaussian distribution with zero mean and a specified variance?

Question: 16

Which of the following is NOT a commonly used activation function in neural networks?

Question: 17

What does the dropout rate represent in a dropout layer?

Question: 18

Which of the following is a purpose of using an activation function in a neural network?

Question: 19

What does the 'ReLU' activation function stand for?

Question: 20

Which of the following is a drawback of using the ReLU activation function?

Question: 21

Which activation function is preferred for the output layer of a binary classification neural network?
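
For comparison with the multi-class case sketched earlier, a minimal binary-classification head (layer sizes assumed):

```python
import tensorflow as tf

# A single sigmoid unit outputs P(class = 1), paired with binary cross-entropy.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(16,)),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # probability of the positive class
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
```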

Question: 22

Which of the following initialization strategies is commonly used for neural network weights?

Question: 23

Which layer type is used to prevent overfitting by randomly dropping units during training?

Question: 24

What is the primary purpose of batch normalization in neural networks?

Question: 25

Which of the following activation functions is not continuous and differentiable?

Question: 26

Which of the following is a common strategy to determine the initial learning rate in training neural networks?
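
As a reference, a hedged sketch of setting an initial learning rate and decaying it over training; the specific values are assumptions, not recommendations:

```python
import tensorflow as tf

# One common approach: pick an initial learning rate (often via a small sweep
# such as 1e-2, 1e-3, 1e-4) and let a schedule decay it during training.
schedule = tf.keras.optimizers.schedules.ExponentialDecay(
    initial_learning_rate=1e-3,  # assumed starting point
    decay_steps=1000,
    decay_rate=0.96,
)
optimizer = tf.keras.optimizers.Adam(learning_rate=schedule)
```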

Question: 27

What is the primary purpose of using weight initialization techniques in neural networks?

Question: 28

Which of the following activation functions is NOT bounded?

Question: 29

What is the primary purpose of using L1 and L2 regularization techniques in neural networks?
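
A minimal sketch of attaching L1/L2 penalties to layer weights in Keras; the coefficients are illustrative assumptions:

```python
import tensorflow as tf

# L1 and L2 penalties are added to the training loss to discourage large
# weights, which helps reduce overfitting.
dense_l2 = tf.keras.layers.Dense(
    64,
    activation="relu",
    kernel_regularizer=tf.keras.regularizers.l2(1e-4),  # L2 penalty on weights
)
dense_l1_l2 = tf.keras.layers.Dense(
    64,
    activation="relu",
    kernel_regularizer=tf.keras.regularizers.l1_l2(l1=1e-5, l2=1e-4),  # combined penalty
)
```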

Question: 30

Which of the following is a common technique used to address the vanishing gradient problem?