AWS Certified Machine Learning – Specialty Dump 06

A Machine Learning Specialist observes several performance problems with the training portion of a machine learning solution on Amazon SageMaker. The solution uses a large training dataset, 2 TB in size, and the SageMaker k-means algorithm. The observed issues include an unacceptably long wait before the training job launches and poor I/O throughput while training the model. What should the Specialist do to address the performance issues with the current solution?

A. Use the SageMaker batch transform feature.
B. Compress the training data into Apache Parquet format.
C. Ensure that the input mode for the training job is set to Pipe.
D. Copy the training dataset to an Amazon EFS volume mounted on the SageMaker instance.

A = wrong, batch transform is for offline inference, not training. B = wrong, the built-in k-means algorithm does not take Parquet; it expects recordIO-protobuf or CSV. D = wrong, EFS does not remove the launch delay and its throughput is worse. C is correct: Pipe mode streams the dataset directly from Amazon S3, so the job starts without first downloading 2 TB and I/O throughput improves.


A Machine Learning Specialist is building a convolutional neural network (CNN) that will classify 10 types of animals. The Specialist has built a series of layers in a neural network that will take an input image of an animal, pass it through a series of convolutional and pooling layers, and finally pass it through a dense and fully connected layer with 10 nodes. The Specialist would like the output of the neural network to be a probability distribution of how likely it is that the input image belongs to each of the 10 classes. Which function will produce the desired output?

A. Dropout
B. Smooth L1 loss
C. Softmax
D. Rectified linear units (ReLU)

Note: When you see probability distribution, go for Softmax.
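
For reference, a minimal softmax sketch in NumPy (the logits are made up), showing how raw scores from the final 10-node layer become a probability distribution:

```python
import numpy as np

def softmax(logits):
    # Subtract the max for numerical stability; the result is unchanged.
    exps = np.exp(logits - np.max(logits))
    return exps / exps.sum()

# Hypothetical raw outputs of the final 10-node dense layer.
logits = np.array([2.0, 1.0, 0.1, -1.2, 0.5, 0.0, 3.3, -0.7, 1.8, 0.2])
probs = softmax(logits)
print(probs)        # ten class probabilities
print(probs.sum())  # 1.0
```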


A Machine Learning Specialist discovers the following statistics while experimenting with a model.

What can the Specialist conclude from the experiments?

  • A. The model in Experiment 1 had a high variance error that was reduced in Experiment 3 by regularization. Experiment 2 shows that there is minimal bias error in Experiment 1.
  • B. The model in Experiment 1 had a high bias error that was reduced in Experiment 3 by regularization. Experiment 2 shows that there is minimal variance error in Experiment 1.
  • C. The model in Experiment 1 had a high bias error and a high variance error that were reduced in Experiment 3 by regularization. Experiment 2 shows that high bias cannot be reduced by increasing layers and neurons in the model.
  • D. The model in Experiment 1 had a high random noise error that was reduced in Experiment 3 by regularization. Experiment 2 shows that random noise cannot be reduced by increasing layers and neurons in the model.

Note: High bias = underfitting; high variance = overfitting. This model is clearly overfitting, meaning it memorized too many features to generalize. The overfitting is reduced in Experiment 3 by regularization. Experiment 2 shows bias is not the problem: making the model more complex (more layers and neurons) does not reduce the training error.

High bias = training result is bad. Analogy: the model is not learning much.

High variance = training result is good, testing is bad. Analogy: the model learns too specifically and fails to generalize.
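
A quick numeric illustration of these rules, using made-up error rates:

```python
# Hypothetical (train error, test error) pairs illustrating the diagnosis rules.
experiments = {
    "high bias (underfitting)":    (0.30, 0.32),  # bad on both sets
    "high variance (overfitting)": (0.02, 0.25),  # great on train, bad on test
    "good fit":                    (0.05, 0.07),  # low errors, small gap
}
for name, (train_err, test_err) in experiments.items():
    print(f"{name}: train={train_err:.2f} test={test_err:.2f} "
          f"gap={test_err - train_err:.2f}")
```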


A web-based company wants to improve the conversion rate on its landing page. Using a large historical dataset of customer visits, the company has repeatedly trained a multi-class deep learning network algorithm on Amazon SageMaker. However, there is an overfitting problem: training data shows 90% accuracy in predictions, while test data shows only 70% accuracy. The company needs to boost the generalization of its model before deploying it into production to maximize conversions of visits to purchases. Which action is recommended to provide the HIGHEST accuracy model for the company’s test and validation data?

A. Increase the randomization of training data in the mini-batches used in training.
B. Allocate a higher proportion of the overall data to the training dataset.
C. Apply L1 or L2 regularization and dropouts to the training.
D. Reduce the number of layers and units (or neurons) from the deep learning network.

Note: When you see overfitting, consider regularization and dropout.
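
A minimal Keras sketch (the input width and layer sizes are illustrative) combining L2 weight regularization with dropout, the remedy option C describes:

```python
import tensorflow as tf
from tensorflow.keras import layers, regularizers

# Illustrative network; the input width and layer sizes are assumptions.
model = tf.keras.Sequential([
    layers.Dense(128, activation="relu", input_shape=(50,),
                 kernel_regularizer=regularizers.l2(1e-4)),  # L2 penalty
    layers.Dropout(0.5),  # randomly silence half the units each step
    layers.Dense(64, activation="relu",
                 kernel_regularizer=regularizers.l2(1e-4)),
    layers.Dropout(0.5),
    layers.Dense(10, activation="softmax"),  # multi-class output
])
model.compile(optimizer="adam", loss="categorical_crossentropy",
              metrics=["accuracy"])
```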


A Machine Learning Specialist is building a supervised model that will evaluate customers’ satisfaction with their mobile phone service based on recent usage. The model’s output should infer whether or not a customer is likely to switch to a competitor in the next 30 days. Which of the following modeling techniques should the Specialist use?

  • A. Time-series prediction
  • B. Anomaly detection
  • C. Binary classification
  • D. Regression

A and B = wrong, nonsense. D = wrong, regression is for continuous-variable prediction.


A large mobile network operating company is building a machine learning model to predict customers who are likely to unsubscribe from the service. The company plans to offer an incentive for these customers as the cost of churn is far greater than the cost of the incentive.
The model produces the following confusion matrix after evaluating on a test dataset of 100 customers:

Based on the model evaluation results, why is this a viable model for production?

A. The model is 86% accurate and the cost incurred by the company as a result of false negatives is less than the false positives.
B. The precision of the model is 86%, which is less than the accuracy of the model.
C. The model is 86% accurate and the cost incurred by the company as a result of false positives is less than the false negatives.
D. The precision of the model is 86%, which is greater than the accuracy of the model.

B and D = wrong, by definition. A = wrong: an FP means rewarding a customer who would not have churned (a cheap incentive), while an FN means losing a churning customer (expensive), so FPs cost a lot less than FNs. That is what C says.

Accuracy = correct predictions / total predictions. How often did I get it right?

Precision = TP / (TP + FP). When I say it is positive, it probably is. Eliminates FPs.

Recall = TP / (TP + FN). Analogy: I have all the positives covered. Eliminates FNs.
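
The same three metrics computed from a hypothetical 2x2 confusion matrix (counts chosen so accuracy comes out to 86%, matching the question):

```python
# Hypothetical confusion-matrix counts (100 predictions in total).
tp, fp, fn, tn = 10, 4, 10, 76

accuracy  = (tp + tn) / (tp + tn + fp + fn)
precision = tp / (tp + fp)
recall    = tp / (tp + fn)

print(f"accuracy={accuracy:.2f} precision={precision:.2f} recall={recall:.2f}")
# accuracy=0.86 precision=0.71 recall=0.50
```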


A Machine Learning Specialist is developing a custom video recommendation model for an application. The dataset used to train this model is very large with millions of data points and is hosted in an Amazon S3 bucket. The Specialist wants to avoid loading all of this data onto an Amazon SageMaker notebook instance because it would take hours to move and will exceed the attached 5 GB Amazon EBS volume on the notebook instance.
Which approach allows the Specialist to use all the data to train the model?

  • A. Load a smaller subset of the data into the SageMaker notebook and train locally. Confirm that the training code is executing and the model parameters seem reasonable. Initiate a SageMaker training job using the full dataset from the S3 bucket using Pipe input mode.
  • B. Launch an Amazon EC2 instance with an AWS Deep Learning AMI and attach the S3 bucket to the instance. Train on a small amount of the data to verify the training code and hyperparameters. Go back to Amazon SageMaker and train using the full dataset.
  • C. Use AWS Glue to train a model using a small subset of the data to confirm that the data will be compatible with Amazon SageMaker. Initiate a SageMaker training job using the full dataset from the S3 bucket using Pipe input mode.
  • D. Load a smaller subset of the data into the SageMaker notebook and train locally. Confirm that the training code is executing and the model parameters seem reasonable. Launch an Amazon EC2 instance with an AWS Deep Learning AMI and attach the S3 bucket to train the full dataset.

Note: When the data is too large to fit on the instance, think of Pipe mode.
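
A sketch of requesting Pipe mode with the SageMaker Python SDK (v2 parameter names; the image URI, role, and S3 path are placeholders):

```python
from sagemaker.estimator import Estimator

# All names below are placeholders for your own resources.
estimator = Estimator(
    image_uri="<algorithm-image-uri>",
    role="<execution-role-arn>",
    instance_count=1,
    instance_type="ml.m5.xlarge",
    input_mode="Pipe",  # stream from S3 instead of downloading first
)
estimator.fit({"train": "s3://<bucket>/train/"})
```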


The Chief Editor for a product catalog wants the Research and Development team to build a machine learning system that can be used to detect whether or not individuals in a collection of images are wearing the company’s retail brand. The team has a set of training data. Which machine learning algorithm should the researchers use that BEST meets their requirements?

A. Latent Dirichlet Allocation (LDA)
B. Recurrent neural network (RNN)
C. K-means
D. Convolutional neural network (CNN)

Note: CNN is for images. RNN is for translation, speech recognition, and time-series prediction. K-means is for clustering. LDA is for topic modeling of text.
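
A minimal Keras CNN sketch for this brand-detection task framed as binary image classification; the input shape is an assumption:

```python
import tensorflow as tf
from tensorflow.keras import layers

# Assumes 128x128 RGB inputs; adjust to the real image size.
model = tf.keras.Sequential([
    layers.Conv2D(32, 3, activation="relu", input_shape=(128, 128, 3)),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),
    layers.Dense(1, activation="sigmoid"),  # brand present / absent
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])
```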


A Machine Learning Specialist is using Amazon SageMaker to host a model for a highly available customer-facing application.
The Specialist has trained a new version of the model, validated it with historical data, and now wants to deploy it to production. To limit any risk of a negative customer experience, the Specialist wants to be able to monitor the model and roll it back, if needed. What is the SIMPLEST approach with the LEAST risk to deploy the model and roll it back, if needed?

  • A. Create a SageMaker endpoint and configuration for the new model version. Redirect production traffic to the new endpoint by updating the client configuration. Revert traffic to the last version if the model does not perform as expected.
  • B. Create a SageMaker endpoint and configuration for the new model version. Redirect production traffic to the new endpoint by using a load balancer. Revert traffic to the last version if the model does not perform as expected.
  • C. Update the existing SageMaker endpoint to use a new configuration that is weighted to send 5% of the traffic to the new variant. Revert traffic to the last version by resetting the weights if the model does not perform as expected.
  • D. Update the existing SageMaker endpoint to use a new configuration that is weighted to send 100% of the traffic to the new variant. Revert traffic to the last version by resetting the weights if the model does not perform as expected.

Note: When you see A/B testing with SageMaker, think of production variants.

https://aws.amazon.com/blogs/machine-learning/a-b-testing-ml-models-in-production-using-amazon-sagemaker/
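
A boto3 sketch of shifting traffic weights on an existing endpoint (the endpoint and variant names are placeholders); the same call resets the weights for a rollback:

```python
import boto3

sm = boto3.client("sagemaker")

# Send 5% of traffic to the new variant; names are placeholders.
sm.update_endpoint_weights_and_capacities(
    EndpointName="my-endpoint",
    DesiredWeightsAndCapacities=[
        {"VariantName": "current-model", "DesiredWeight": 95},
        {"VariantName": "new-model", "DesiredWeight": 5},
    ],
)
# Rollback: call it again with new-model's weight set back to 0.
```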


A Machine Learning Specialist is working for a credit card processing company and receives an unbalanced dataset containing credit card transactions. It contains 99,000 valid transactions and 1,000 fraudulent transactions. The Specialist is asked to score a model that was run against the dataset.

The Specialist has been advised that identifying valid transactions is equally as important as identifying fraudulent transactions. What metric is BEST suited to score the model?

  • A. Precision
  • B. Recall
  • C. Area Under the ROC Curve (AUC)
  • D. Root Mean Square Error (RMSE)

Note: AUC is threshold-independent and weighs true positives against false positives across every cutoff, so it treats both classes; precision and recall each focus on one error type, and RMSE is a regression metric.
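
A scikit-learn sketch of scoring with AUC on a synthetic imbalanced set (99% valid, 1% fraudulent, with made-up scores):

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
# Synthetic labels: 990 valid (0), 10 fraudulent (1).
y_true = np.concatenate([np.zeros(990), np.ones(10)])
# Hypothetical model scores: fraud tends to score higher.
y_score = np.concatenate([rng.normal(0.2, 0.1, 990),
                          rng.normal(0.7, 0.1, 10)])

print(roc_auc_score(y_true, y_score))  # threshold-independent score
```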


A Machine Learning Specialist is working with a large company to leverage machine learning within its products. The company wants to group its customers into categories based on which customers will and will not churn within the next 6 months. The company has labeled the data available to the Specialist.
Which machine learning model type should the Specialist use to accomplish this task?

  • A. Linear regression
  • B. Classification
  • C. Clustering
  • D. Reinforcement learning

Note: Yes/No problem = binary classification.


A Machine Learning Specialist was given a dataset consisting of unlabeled data. The Specialist must create a model that can help the team classify the data into different buckets. What model should be used to complete this work?

  • A. K-means clustering
  • B. Random Cut Forest (RCF)
  • C. XGBoost
  • D. BlazingText

Note: K-means is unsupervised clustering.


A Machine Learning Specialist has built a model using Amazon SageMaker built-in algorithms and is not getting the expected accuracy. The Specialist wants to use hyperparameter optimization to increase the model’s accuracy. Which method is the MOST repeatable and requires the LEAST amount of effort to achieve this?

A. Launch multiple training jobs in parallel with different hyperparameters.
B. Create an AWS Step Functions workflow that monitors the accuracy in Amazon CloudWatch Logs and relaunches the training job with a defined list of hyperparameters.
C. Create a hyperparameter tuning job and set the accuracy as an objective metric.
D. Create a random walk in the parameter space to iterate through a range of values that should be used for each individual hyperparameter.

Note: This is a free score question; SageMaker automatic model tuning (option C) does exactly this.
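
A sketch of such a tuning job with the SageMaker Python SDK; the image URI, role, metric name, and hyperparameter ranges are all assumptions:

```python
from sagemaker.estimator import Estimator
from sagemaker.tuner import (ContinuousParameter, HyperparameterTuner,
                             IntegerParameter)

# Placeholder estimator; substitute your own image URI and role.
estimator = Estimator(image_uri="<algorithm-image-uri>",
                      role="<execution-role-arn>",
                      instance_count=1, instance_type="ml.m5.xlarge")

tuner = HyperparameterTuner(
    estimator=estimator,
    objective_metric_name="validation:accuracy",  # illustrative metric
    objective_type="Maximize",
    hyperparameter_ranges={
        "eta": ContinuousParameter(0.01, 0.3),
        "max_depth": IntegerParameter(3, 10),
    },
    max_jobs=20,
    max_parallel_jobs=2,
)
tuner.fit({"train": "s3://<bucket>/train/",
           "validation": "s3://<bucket>/validation/"})
```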


A bank’s Machine Learning team is developing an approach for credit card fraud detection. The company has a large dataset of historical data labeled as fraudulent. The goal is to build a model to take the information from new transactions and predict whether each transaction is fraudulent or not. Which built-in Amazon SageMaker machine learning algorithm should be used for modeling this problem?

  • A. Seq2seq
  • B. XGBoost
  • C. K-means.
  • D. Random Cut Forest (RCF)

A = wrong, for language processing. C = wrong, for clustering. D = wrong, for anomaly detection.
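
Outside SageMaker, the same modeling choice can be sketched with the open-source xgboost package on synthetic data; scale_pos_weight is one common lever for the class imbalance typical of fraud:

```python
import numpy as np
from xgboost import XGBClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 10))  # synthetic transaction features
y = (X[:, 0] + rng.normal(size=1000) > 1.5).astype(int)  # synthetic labels

# Weight the rare positive (fraud) class more heavily.
model = XGBClassifier(
    n_estimators=100, max_depth=4,
    scale_pos_weight=(y == 0).sum() / max((y == 1).sum(), 1))
model.fit(X, y)
print(model.predict_proba(X[:3])[:, 1])  # fraud probabilities
```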


This graph shows the training and validation loss against the epochs for a neural network. The network being trained is as follows:

  • Two dense layers, one output neuron
  • 100 neurons in each layer
  • 100 epochs
  • Random initialization of weights

Which technique can be used to improve model performance in terms of accuracy in the validation set?

  • A. Early stopping
  • B. Random initialization of weights with appropriate seed
  • C. Increasing the number of epochs
  • D. Adding another layer with the 100 neurons

Note: When further training does not reduce the validation error, stop early.
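
A Keras sketch of early stopping on validation loss (the patience value is an assumption):

```python
import tensorflow as tf

# Stop when validation loss has not improved for 5 consecutive epochs,
# and restore the best weights seen so far.
early_stop = tf.keras.callbacks.EarlyStopping(
    monitor="val_loss", patience=5, restore_best_weights=True)

# With `model` and the data defined elsewhere:
# model.fit(x_train, y_train, validation_data=(x_val, y_val),
#           epochs=100, callbacks=[early_stop])
```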


While reviewing the histogram of residuals on regression evaluation data, a Machine Learning Specialist notices that the residuals do not form a zero-centered bell shape as shown. What does this mean?

A. The model might have prediction errors over a range of target values.
B. The dataset cannot be accurately represented using the regression model.
C. There are too many variables in the model.
D. The model is predicting its target values perfectly.

Note: This is a textbook question.

https://docs.aws.amazon.com/machine-learning/latest/dg/regression-model-insights.html
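
A matplotlib sketch of the check being described, on synthetic data: plot the residuals and see whether they form a zero-centered bell:

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
y_true = rng.normal(50, 10, 1000)             # synthetic targets
y_pred = y_true + rng.normal(2.0, 5.0, 1000)  # predictions with a bias

residuals = y_true - y_pred
plt.hist(residuals, bins=40)
plt.axvline(0, color="red")  # a healthy model centers its residuals here
plt.xlabel("residual (actual - predicted)")
plt.show()
```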


A Machine Learning Specialist is using Apache Spark for pre-processing training data. As part of the Spark pipeline, the Specialist wants to use Amazon SageMaker for training a model and hosting it. Which of the following should the Specialist do to integrate the Spark application with SageMaker? (Select THREE.)

A. Download the AWS SDK for the Spark environment.
B. Install the SageMaker Spark library in the Spark environment.
C. Use the appropriate estimator from the SageMaker Spark Library to train a model.
D. Compress the training data into a ZIP file and upload it to a pre-defined Amazon S3 bucket.
E. Use the SageMakerModel.transform method to get inferences from the model hosted in SageMaker.
F. Convert the DataFrame object to a CSV file, and use the CSV file as input for obtaining inferences from SageMaker.

Note: This is a textbook question.

https://docs.amazonaws.cn/en_us/sagemaker/latest/dg/apache-spark.html
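
A sketch following the linked documentation, using the sagemaker_pyspark library (the role ARN, DataFrames, and feature dimension are placeholders): install the library (B), train with a SageMaker Spark estimator (C), and call transform on the returned model (E):

```python
from sagemaker_pyspark import IAMRole
from sagemaker_pyspark.algorithms import KMeansSageMakerEstimator

# Assumes a Spark session configured with the library's jars;
# `training_df` and `test_df` are DataFrames prepared earlier in the
# pipeline, and the role ARN and instance types are placeholders.
estimator = KMeansSageMakerEstimator(
    sagemakerRole=IAMRole("<execution-role-arn>"),
    trainingInstanceType="ml.m5.xlarge",
    trainingInstanceCount=1,
    endpointInstanceType="ml.m5.xlarge",
    endpointInitialInstanceCount=1)
estimator.setK(10)
estimator.setFeatureDim(784)  # illustrative feature dimension

model = estimator.fit(training_df)      # trains on SageMaker (C)
predictions = model.transform(test_df)  # inference via the hosted model (E)
```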


During mini-batch training of a neural network for a classification problem, a Data Scientist notices that training accuracy oscillates.
What is the MOST likely cause of this issue?

  • A. The class distribution in the dataset is imbalanced.
  • B. Dataset shuffling is disabled.
  • C. The batch size is too big.
  • D. The learning rate is very high.

Note: This is a free score question.


A Machine Learning Specialist is designing a system for improving sales for a company. The objective is to use the large amount of information the company has on users’ behavior and product preferences to predict which products users would like based on the users’ similarity to other users.
What should the Specialist do to meet this objective?

  • A. Build a content-based filtering recommendation engine with Apache Spark ML on Amazon EMR.
  • B. Build a collaborative filtering recommendation engine with Apache Spark ML on Amazon EMR.
  • C. Build a model-based filtering recommendation engine with Apache Spark ML on Amazon EMR.
  • D. Build a combinative filtering recommendation engine with Apache Spark ML on Amazon EMR.

Note: This is a free score question. Recommendation based on similarity between users = collaborative filtering.
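
A Spark ML sketch of collaborative filtering with ALS; the column names and tiny inline dataset are assumptions:

```python
from pyspark.sql import SparkSession
from pyspark.ml.recommendation import ALS

spark = SparkSession.builder.appName("recs").getOrCreate()

# Assumes a user/item interaction DataFrame with these columns.
ratings = spark.createDataFrame(
    [(0, 10, 4.0), (0, 11, 1.0), (1, 10, 5.0), (1, 12, 3.0)],
    ["userId", "productId", "rating"])

als = ALS(userCol="userId", itemCol="productId", ratingCol="rating",
          rank=10, coldStartStrategy="drop")
model = als.fit(ratings)
model.recommendForAllUsers(3).show()  # top-3 products per user
```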


A Machine Learning Specialist has completed a proof of concept for a company using a small data sample, and now the Specialist is ready to implement an end-to-end solution in AWS using Amazon SageMaker. The historical training data is stored in Amazon RDS.
Which approach should the Specialist use for training a model using that data?

  • A. Write a direct connection to the SQL database within the notebook and pull data in.
  • B. Push the data from Microsoft SQL Server to Amazon S3 using an AWS Data Pipeline and provide the S3 location within the notebook.
  • C. Move the data to Amazon DynamoDB and set up a connection to DynamoDB within the notebook to pull data in.
  • D. Move the data to Amazon ElastiCache using AWS DMS and set up a connection within the notebook to pull data in for fast access.

Note: Only A and B are RDS-related, and B is the more solid approach. I don’t particularly like this question; the answers are vague and don’t really reflect the question.


Given the following confusion matrix for a movie classification model, what is the true class frequency for Romance and the predicted class frequency for Adventure?

A. The true class frequency for Romance is 77.56% and the predicted class frequency for Adventure is 20.85%
B. The true class frequency for Romance is 57.92% and the predicted class frequency for Adventure is 13.12%
C. The true class frequency for Romance is 0.78 and the predicted class frequency for Adventure is 0.47
D. The true class frequency for Romance is 77.56% * 0.78 and the predicted class frequency for Adventure is 20.85%

Note: This is a textbook question. True class frequency = actual ratios of classes in dataset. Predicted class frequency = predicted ratios of classes.

https://docs.aws.amazon.com/machine-learning/latest/dg/multiclass-model-insights.html
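
In NumPy terms, with a hypothetical matrix (rows = true class, columns = predicted class): true class frequency is a row sum over the grand total, and predicted class frequency is a column sum over the grand total:

```python
import numpy as np

# Hypothetical counts; rows = true class, columns = predicted class.
classes = ["Romance", "Adventure", "Drama"]
cm = np.array([[60, 10, 8],
               [5, 15, 4],
               [3, 2, 9]])

total = cm.sum()
true_freq = cm.sum(axis=1) / total  # how often each class actually occurs
pred_freq = cm.sum(axis=0) / total  # how often each class is predicted

for name, t, p in zip(classes, true_freq, pred_freq):
    print(f"{name}: true={t:.2%} predicted={p:.2%}")
```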


A Machine Learning Specialist prepared the following graph displaying the results of k-means for k = [1:10].

Considering the graph, what is a reasonable selection for the optimal choice of k?

  • A. 1
  • B. 4
  • C. 7
  • D. 10

Note: You are looking for the "elbow", the point where the curve bends.

https://uc-r.github.io/kmeans_clustering
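
A scikit-learn sketch of producing such an elbow curve on synthetic data: plot the within-cluster sum of squares against k and look for the bend:

```python
import matplotlib.pyplot as plt
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs

X, _ = make_blobs(n_samples=500, centers=4, random_state=0)

ks = range(1, 11)
inertias = [KMeans(n_clusters=k, n_init=10, random_state=0).fit(X).inertia_
            for k in ks]

plt.plot(ks, inertias, marker="o")
plt.xlabel("k")
plt.ylabel("within-cluster sum of squares")
plt.show()  # the bend (here near k = 4) is the reasonable choice
```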
