TensorFlow with GPU QUIZ (MCQ QUESTIONS AND ANSWERS)

Question: 1

Which TensorFlow API can be used to distribute training across multiple GPUs on a single machine?
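
Several of the questions below revolve around tf.distribute.MirroredStrategy, so a minimal sketch may help. It assumes TensorFlow 2.x with at least one visible GPU; with a single device it simply runs one replica.

```python
import tensorflow as tf

# MirroredStrategy replicates the model on every local GPU and keeps the
# per-GPU copies of each variable in sync with an all-reduce after each step.
strategy = tf.distribute.MirroredStrategy()          # uses all local GPUs by default
print("Replicas in sync:", strategy.num_replicas_in_sync)

with strategy.scope():                               # variables created here are mirrored
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(64, activation="relu"),
        tf.keras.layers.Dense(1),
    ])
    model.compile(optimizer="adam", loss="mse")

# model.fit(...) would then split each global batch across the replicas.
```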

Question: 2

What is the recommended way to utilize multiple GPUs in TensorFlow for parallel processing?

Question: 3

How can you set the memory growth option for all available GPUs in TensorFlow?
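
One common shape of the answer, sketched for TensorFlow 2.x: list the physical GPUs and enable memory growth on each one before any of them has been initialized.

```python
import tensorflow as tf

# Must run before the GPUs are first used, i.e. near the top of the program.
gpus = tf.config.experimental.list_physical_devices("GPU")
for gpu in gpus:
    # Allocate GPU memory on demand instead of grabbing it all up front.
    tf.config.experimental.set_memory_growth(gpu, True)
```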

Question: 4

What does tf.distribute.MirroredStrategy do during training with multiple GPUs?

Question: 5

How does tf.config.experimental.set_visible_devices() help in GPU management?
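
A hedged illustration of the visibility control this question refers to, assuming at least one GPU is present:

```python
import tensorflow as tf

gpus = tf.config.experimental.list_physical_devices("GPU")
if gpus:
    # Expose only the first GPU to the TensorFlow runtime; the remaining
    # GPUs stay invisible and cannot receive any ops.
    tf.config.experimental.set_visible_devices(gpus[0], "GPU")
    print(tf.config.experimental.list_logical_devices("GPU"))
```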

Question: 6

How does tf.distribute.MirroredStrategy distribute variables during training?

Question: 7

What is the role of tf.config.experimental.set_logical_device_configuration()?
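
In current TensorFlow the non-experimental spelling is tf.config.set_logical_device_configuration(), and the set_virtual_device_configuration() name in Question 9 is the older experimental alias for the same idea. A sketch, assuming a GPU with enough memory for the two slices:

```python
import tensorflow as tf

gpus = tf.config.list_physical_devices("GPU")
if gpus:
    # Split the first physical GPU into two logical GPUs, each capped at
    # 2048 MB. This must run before the GPU is initialized.
    tf.config.set_logical_device_configuration(
        gpus[0],
        [tf.config.LogicalDeviceConfiguration(memory_limit=2048),
         tf.config.LogicalDeviceConfiguration(memory_limit=2048)],
    )
    print(tf.config.list_logical_devices("GPU"))
```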

Question: 8

In TensorFlow, how can you specify which GPU to use for computations?
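
A sketch of explicit device placement with tf.device(), which also bears on Questions 13, 16, 17 and 22; it assumes the machine has at least two GPUs (use "/GPU:0" otherwise).

```python
import tensorflow as tf

# Device strings follow the "/GPU:<index>" / "/CPU:0" convention.
with tf.device("/GPU:1"):              # pin these ops to the second GPU
    a = tf.random.uniform((1024, 1024))
    b = tf.random.uniform((1024, 1024))
    c = tf.matmul(a, b)

print(c.device)                        # e.g. ".../device:GPU:1"
```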

Question: 9

What does tf.config.experimental.set_virtual_device_configuration() allow in TensorFlow?

Question: 10

When using tf.distribute.MirroredStrategy(), how are variables updated during training?

Question: 11

Which TensorFlow module facilitates the distribution of computations across multiple devices?

Question: 12

What is the primary purpose of tf.config.experimental.set_memory_growth()?

Question: 13

Which TensorFlow function is used to explicitly assign a GPU device to an operation?

Question: 14

What happens if a tensor is moved between devices using tf.identity() without modifying the tensor?
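
One way to read Questions 14, 21 and 25 together, sketched for eager TensorFlow 2.x with a single GPU: tf.identity() evaluated under a different device scope returns a copy of the tensor on that device, with the values left untouched.

```python
import tensorflow as tf

with tf.device("/CPU:0"):
    x = tf.constant([1.0, 2.0, 3.0])   # lives in host memory

with tf.device("/GPU:0"):              # assumes one GPU is available
    y = tf.identity(x)                 # same values, copied into GPU memory

print(x.device)                        # .../device:CPU:0
print(y.device)                        # .../device:GPU:0
```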

Question: 15

How does tf.config.experimental.list_physical_devices('GPU') help in GPU management?
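
A small sketch of the physical/logical distinction behind this kind of device-management question:

```python
import tensorflow as tf

# Physical devices are the GPUs the runtime detects on the machine;
# logical devices are what it exposes after visibility and
# virtual-device configuration have been applied.
print("Physical GPUs:", tf.config.experimental.list_physical_devices("GPU"))
print("Logical GPUs: ", tf.config.experimental.list_logical_devices("GPU"))
```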

Question: 16

Which function is used to explicitly assign operations to CPU in TensorFlow?

Question: 17

In TensorFlow, what is the purpose of tf.device()?

Question: 18

What is the role of tf.config.experimental.set_memory_growth() in GPU management?

Question: 19

When utilizing multiple GPUs with MirroredStrategy, how are variables updated?

Question: 20

Which TensorFlow feature facilitates automatic distribution of computations across multiple GPUs?

Question: 21

What does tf.identity() do when moving tensors between CPU and GPU?

Question: 22

Which TensorFlow function is used to explicitly assign operations to a specific device, such as CPU or GPU?

Question: 23

What is the purpose of tf.config.experimental.set_visible_devices()?

Question: 24

Which function can be used to utilize multiple GPUs for parallel processing in TensorFlow?

Question: 25

How can tensors be moved between CPU and GPU in TensorFlow?

Question: 26

What method can be used for GPU device management in TensorFlow?

Question: 27

What is the purpose of tf.config.experimental.set_memory_growth()?

Question: 28

Which TensorFlow module provides support for distributing training across multiple machines?
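
For the multi-machine case, the tf.distribute module also provides MultiWorkerMirroredStrategy. A hedged sketch: in a real cluster each worker sets the TF_CONFIG environment variable to describe the worker addresses and its own index before this code runs; with no TF_CONFIG it falls back to a single local worker, which keeps the sketch runnable on one machine.

```python
import tensorflow as tf

# Every machine in the cluster runs this same program; the cluster layout
# (worker host:port list and this worker's index) comes from the TF_CONFIG
# environment variable set outside the script.
strategy = tf.distribute.MultiWorkerMirroredStrategy()

with strategy.scope():
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(64, activation="relu"),
        tf.keras.layers.Dense(1),
    ])
    model.compile(optimizer="sgd", loss="mse")
```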

Question: 29

How can you check the available physical GPUs in TensorFlow?

Question: 30

What is the advantage of using tf.distribute.MirroredStrategy over manual GPU management?