NVIDIA NCA-GENM - Generative AI Multimodal Exam
Page: 2 / 12
Total 60 questions
Question #6 (Topic 1: Core machine learning and AI knowledge)
In large language models (LLMs), what is the purpose of the attention mechanism?
A. To measure the importance of the words in the output sequence.
B. To assign weights to each word in the input sequence.
C. To determine the order in which words are generated.
D. To capture the order of the words in the input sequence.
Answer: B
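To make answer B concrete, here is a minimal NumPy sketch of scaled dot-product self-attention (toy dimensions and random inputs, not any particular model): the softmax produces one weight per word in the input sequence, and the output is the weighted sum of the input representations.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Assign a weight to each input position, then take a weighted sum of V."""
    d_k = K.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                  # similarity of each query to each key
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax: one weight per input word
    return weights @ V, weights

rng = np.random.default_rng(0)
seq_len, d_model = 4, 8                              # 4 input "words", toy dimensionality
X = rng.normal(size=(seq_len, d_model))
output, weights = scaled_dot_product_attention(X, X, X)  # self-attention: Q = K = V = X
# Each row of `weights` sums to 1: the importance assigned to every input word.
```

Each row of `weights` is a probability distribution over the input sequence, which is exactly the "weight assigned to each word in the input" the question refers to.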
Question #7 (Topic 1: Core machine learning and AI knowledge)
What is the purpose of a kernel in a Convolutional Neural Network (CNN)?
A. To normalize the input data.
B. To classify the data into different categories.
C. To perform convolution operations on input data.
D. To calculate the loss function.
Answer: C
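Answer C can be illustrated with a bare-bones NumPy convolution (strictly, cross-correlation, which is what deep-learning frameworks call "convolution"). The kernel slides over the input and produces a feature map; the Sobel kernel below is just a classic example of a learned filter's job, here detecting vertical edges.

```python
import numpy as np

def conv2d(image, kernel):
    """Slide `kernel` over `image` (valid padding, stride 1)."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

image = np.array([[0, 0, 0, 1, 1, 1]] * 6, dtype=float)   # vertical edge in the middle
sobel_x = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]], dtype=float)             # classic edge-detecting kernel
feature_map = conv2d(image, sobel_x)
# The feature map responds strongly (value 4) exactly where the edge lies.
```

In a trained CNN the kernel values are learned parameters rather than hand-picked, but the operation they perform is this same sliding convolution.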
Question #8 (Topic 1: Core machine learning and AI knowledge)
In LLM evaluation, what does “zero-shot learning” refer to?
A. The model’s ability to learn from zero examples
B. A technique to reduce training time to zero
C. The model’s performance after extensive training
D. The model’s ability to perform tasks it has not been explicitly trained on
Answer: D
Question #9 (Topic 1: Core machine learning and AI knowledge)
Which of the following tasks can be performed with a Transformer-based LLM encoder model?
A. Image generation
B. Generating code
C. Speech recognition
D. Semantic analysis
Answer: D
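The idea behind answer D is that an encoder maps text to embeddings whose geometry reflects meaning. The sketch below is a deliberately crude stand-in: deterministic per-token vectors (seeded by CRC32) mean-pooled into a sentence vector, whereas a real system would use the contextual hidden states of a Transformer encoder such as BERT. Even this toy captures the comparison step used in semantic analysis.

```python
import zlib
import numpy as np

def embed(text, d=64):
    """Toy stand-in for a Transformer encoder: one deterministic vector per
    token, mean-pooled into a sentence embedding. Not a real encoder."""
    vecs = [np.random.default_rng(zlib.crc32(tok.encode())).normal(size=d)
            for tok in text.lower().split()]
    return np.mean(vecs, axis=0)

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

a = embed("the movie was great")
b = embed("the film was great")    # shares most tokens with `a`
c = embed("stock prices fell")     # no overlap with `a`
# Overlapping content pushes `a` and `b` closer together than `a` and `c`.
```

With a genuine encoder the similarity would reflect meaning even without word overlap (e.g. "movie" and "film"), which is what makes encoder models suitable for semantic analysis.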
Question #10 (Topic 1: Core machine learning and AI knowledge)
Which of the following statements about Transformer-based LLMs is true?
A. Transformer-based LLMs can generate text-based data but cannot control the output.
B. Transformer-based LLMs can only be used for text classification tasks.
C. Transformer-based LLMs cannot manipulate or analyze text-based data.
D. Transformer-based LLMs can generate text-based data and can control the output.
Answer: D
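Answer D's claim that the output can be controlled usually refers to decoding-time knobs such as temperature and top-k sampling. The NumPy sketch below (toy 4-token vocabulary and made-up logits, not any real model's output) shows how these knobs reshape the next-token distribution.

```python
import numpy as np

def sample_next_token(logits, temperature=1.0, top_k=None, rng=None):
    """Controlled decoding: temperature reshapes the distribution,
    top-k truncates it to the k most likely tokens."""
    rng = rng or np.random.default_rng()
    logits = np.asarray(logits, dtype=float) / temperature
    if top_k is not None:
        cutoff = np.sort(logits)[-top_k]                 # k-th largest logit
        logits = np.where(logits >= cutoff, logits, -np.inf)
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    return int(rng.choice(len(probs), p=probs))

logits = [2.0, 1.0, 0.5, -1.0]                           # toy scores over 4 tokens
rng = np.random.default_rng(0)
greedy = sample_next_token(logits, temperature=1e-6, rng=rng)  # ~argmax → token 0
# Low temperature → near-deterministic output; high temperature → more diverse.
```

In practice these same parameters (along with top-p and repetition penalties) are exposed by LLM generation APIs, which is precisely the output control the question describes.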