AWS Certified Machine Learning – Specialty Dump 01

An interactive online dictionary wants to add a widget that displays words used in similar contexts. A Machine Learning Specialist is asked to provide word features for the downstream nearest neighbor model powering the widget.
What should the Specialist do to meet these requirements?

  • A. Create one-hot word encoding vectors.
  • B. Produce a set of synonyms for every word using Amazon Mechanical Turk.
  • C. Create word embedding vectors that store edit distance with every other word.
  • D. Download word embeddings pre-trained on a large corpus.

A = wrong; one-hot vectors are all equidistant from one another, so they carry no information about which words appear in similar contexts. B = obviously wrong even if you don’t know anything about ML: hand-built synonym lists are not features and do not scale. C = wrong; edit distance measures similarity of spellings (how easily one string transforms into another), not similarity of meaning or usage.
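D works because embeddings pre-trained on a large corpus already place words used in similar contexts close together, which is exactly what the downstream nearest neighbor model needs. A minimal sketch of the lookup, assuming the gensim library and one of its downloadable GloVe models (neither is named in the question):

```python
# A minimal sketch, assuming gensim and its downloader API are available; the
# "glove-wiki-gigaword-100" model is an illustrative choice of pre-trained embeddings.
import gensim.downloader as api

# Load 100-dimensional GloVe vectors pre-trained on Wikipedia + Gigaword.
vectors = api.load("glove-wiki-gigaword-100")

# Words used in similar contexts sit close together in embedding space, so a
# nearest-neighbor query returns contextually similar words for the widget.
print(vectors.most_similar("dictionary", topn=5))
```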


A Machine Learning Specialist is configuring Amazon SageMaker so multiple Data Scientists can access notebooks, train models, and deploy endpoints. To ensure the best operational performance, the Specialist needs to be able to track how often the Scientists are deploying models, GPU and CPU utilization on the deployed SageMaker endpoints, and all errors that are generated when an endpoint is invoked.
Which services are integrated with Amazon SageMaker to track this information? (Choose two.)

  • A. AWS CloudTrail
  • B. AWS Health
  • C. AWS Trusted Advisor
  • D. Amazon CloudWatch
  • E. AWS Config
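The utilization and error metrics come from Amazon CloudWatch (SageMaker endpoints publish CPUUtilization, GPUUtilization, and invocation error metrics), while AWS CloudTrail records the deployment API calls the Data Scientists make. A minimal sketch of pulling one of those CloudWatch metrics, assuming boto3 credentials are configured and using placeholder endpoint/variant names:

```python
# A minimal sketch, assuming boto3 is configured; "my-endpoint" and "AllTraffic"
# are placeholder endpoint/variant names.
from datetime import datetime, timedelta
import boto3

cloudwatch = boto3.client("cloudwatch")

# Per-endpoint instance metrics live in the /aws/sagemaker/Endpoints namespace.
response = cloudwatch.get_metric_statistics(
    Namespace="/aws/sagemaker/Endpoints",
    MetricName="CPUUtilization",
    Dimensions=[
        {"Name": "EndpointName", "Value": "my-endpoint"},
        {"Name": "VariantName", "Value": "AllTraffic"},
    ],
    StartTime=datetime.utcnow() - timedelta(hours=1),
    EndTime=datetime.utcnow(),
    Period=300,
    Statistics=["Average"],
)
print(response["Datapoints"])
```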

A retail chain has been ingesting purchasing records from its network of 20,000 stores to Amazon S3 using Amazon Kinesis Data Firehose. To support training an improved machine learning model, training records will require new but simple transformations, and some attributes will be combined. The model needs to be retrained daily.
Given the large number of stores and the legacy data ingestion, which change will require the LEAST amount of development effort?

  • A. Require that the stores switch to capturing their data locally on AWS Storage Gateway for loading into Amazon S3, then use AWS Glue to do the transformation.
  • B. Deploy an Amazon EMR cluster running Apache Spark with the transformation logic, and have the cluster run each day on the accumulating records in Amazon S3, outputting new/transformed records to Amazon S3.
  • C. Spin up a fleet of Amazon EC2 instances with the transformation logic, have them transform the data records accumulating on Amazon S3, and output the transformed records to Amazon S3.
  • D. Insert an Amazon Kinesis Data Analytics stream downstream of the Kinesis Data Firehose stream that transforms raw record attributes into simple transformed values using SQL.

A = wrong; the data is already being ingested to S3 through Kinesis Data Firehose, so changing how the 20,000 stores capture data is unnecessary rework. B and C = wrong; standing up an EMR cluster or an EC2 fleet for simple attribute transformations is far more development and operational effort than an in-stream SQL transformation.
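D keeps the existing Firehose ingestion and only adds an in-stream SQL step. A minimal sketch of the kind of Kinesis Data Analytics SQL that would do it; the attribute names are hypothetical, and in practice this code is supplied to the Kinesis Data Analytics application rather than run from Python:

```python
# Hypothetical Kinesis Data Analytics (SQL) application code, held here as a Python
# string for illustration; the column names are made up, and SOURCE_SQL_STREAM_001 is
# the default in-application input stream name.
TRANSFORM_SQL = """
CREATE OR REPLACE STREAM "TRANSFORMED_STREAM" (
    STORE_ID       VARCHAR(16),
    SALE_TOTAL     DOUBLE,
    ITEMS_PER_SALE DOUBLE
);

CREATE OR REPLACE PUMP "TRANSFORM_PUMP" AS
INSERT INTO "TRANSFORMED_STREAM"
SELECT STREAM
    STORE_ID,
    UNIT_PRICE * QUANTITY AS SALE_TOTAL,       -- combine two raw attributes
    QUANTITY / LINE_ITEMS AS ITEMS_PER_SALE    -- simple derived attribute
FROM "SOURCE_SQL_STREAM_001";
"""
print(TRANSFORM_SQL)
```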


A Machine Learning Specialist is building a convolutional neural network (CNN) that will classify 10 types of animals. The Specialist has built a series of layers in a neural network that will take an input image of an animal, pass it through a series of convolutional and pooling layers, and then finally pass it through a dense and fully connected layer with 10 nodes. The Specialist would like to get an output from the neural network that is a probability distribution of how likely it is that the input image belongs to each of the 10 classes.
Which function will produce the desired output?

  • A. Dropout
  • B. Smooth L1 loss
  • C. Softmax
  • D. Rectified linear units (ReLU)

A = wrong; dropout is a regularization technique, not an output function. B = wrong; Smooth L1 is a loss function used during training, not for producing outputs. D = wrong; ReLU is a hidden-layer activation whose outputs do not sum to 1, so it cannot give a probability distribution over the 10 classes.
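Softmax exponentiates the 10 raw outputs and normalizes them so they are non-negative and sum to 1. A minimal sketch, assuming NumPy; the logits are made-up stand-ins for the outputs of the 10-node dense layer:

```python
# A minimal softmax sketch with NumPy; the logits below are invented values standing
# in for the raw outputs of the final 10-node fully connected layer.
import numpy as np

def softmax(logits):
    # Subtract the max logit for numerical stability, then normalize the exponentials.
    exps = np.exp(logits - np.max(logits))
    return exps / np.sum(exps)

logits = np.array([2.1, 0.3, -1.2, 0.8, 3.4, -0.5, 0.0, 1.7, -2.0, 0.9])
probs = softmax(logits)
print(probs)        # one probability per animal class
print(probs.sum())  # 1.0 -- a valid probability distribution
```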


A Machine Learning Specialist trained a regression model, but the first iteration needs optimizing. The Specialist needs to understand whether the model is more frequently overestimating or underestimating the target.
What option can the Specialist use to determine whether it is overestimating or underestimating the target value?

  • A. Root Mean Square Error (RMSE)
  • B. Residual plots
  • C. Area under the curve
  • D. Confusion matrix

A = wrong; RMSE only tells you how spread out the errors are around the regression line, and because the errors are squared, the direction (overestimate vs. underestimate) is lost. C and D = wrong; both measure classification performance, not regression error.
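A residual plot keeps the sign of each error, so it shows at a glance whether predictions tend to sit above or below the actual values. A minimal sketch, assuming NumPy and matplotlib, with placeholder targets and predictions:

```python
# A minimal residual-plot sketch; y_true and y_pred are placeholder arrays standing in
# for the regression targets and the model's predictions.
import numpy as np
import matplotlib.pyplot as plt

y_true = np.array([10.0, 12.5, 9.0, 14.2, 11.1])
y_pred = np.array([11.0, 12.0, 10.5, 13.8, 12.0])

# Residual = actual - predicted: negative means the model overestimated,
# positive means it underestimated.
residuals = y_true - y_pred
print("overestimates:", np.sum(residuals < 0), "underestimates:", np.sum(residuals > 0))

plt.scatter(y_pred, residuals)
plt.axhline(0, color="gray")
plt.xlabel("Predicted value")
plt.ylabel("Residual (actual - predicted)")
plt.show()
```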


A company wants to classify user behavior as either fraudulent or normal. Based on internal research, a Machine Learning Specialist would like to build a binary classifier based on two features: age of account and transaction month. The class distribution for these features is illustrated in the figure provided.

Based on this information, which model would have the HIGHEST recall with respect to the fraudulent class?

  • A. Decision tree
  • B. Linear support vector machine (SVM)
  • C. Naive Bayesian classifier
  • D. Single Perceptron with sigmoidal activation function

B and C = wrong; the class layout in the figure is not linearly separable, and both a linear SVM and a Naive Bayes classifier effectively draw linear decision boundaries (see the links below). D = wrong; a single perceptron with a sigmoid activation is still a linear classifier. A decision tree can carve out the non-linear region and therefore gives the highest recall on the fraudulent class (a sketch follows the links).

https://svivek.com/teaching/lectures/slides/naive-bayes/naive-bayes-linear.pdf

https://sebastianraschka.com/Articles/2014_naive_bayes_1.html
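A minimal sketch of the intuition, assuming scikit-learn; make_circles stands in for the non-linear class layout in the (not reproduced) exam figure, with class 1 treated as the fraudulent class:

```python
# A minimal sketch with scikit-learn; the make_circles data is a stand-in for the
# non-linear class distribution in the exam figure (class 1 = "fraudulent").
from sklearn.datasets import make_circles
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.svm import LinearSVC
from sklearn.metrics import recall_score

X, y = make_circles(n_samples=1000, noise=0.1, factor=0.4, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

for name, clf in [("decision tree", DecisionTreeClassifier(random_state=0)),
                  ("linear SVM", LinearSVC(max_iter=10000))]:
    clf.fit(X_train, y_train)
    recall = recall_score(y_test, clf.predict(X_test), pos_label=1)
    print(f"{name}: recall on the fraudulent class = {recall:.2f}")
```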


A Machine Learning Specialist kicks off a hyperparameter tuning job for a tree-based ensemble model using Amazon SageMaker with Area Under the ROC Curve (AUC) as the objective metric. This workflow will eventually be deployed in a pipeline that retrains and tunes hyperparameters each night to model click-through on data that goes stale every 24 hours.
With the goal of decreasing the amount of time it takes to train these models, and ultimately to decrease costs, the Specialist wants to reconfigure the input hyperparameter range(s).
Which visualization will accomplish this?

  • A. A histogram showing whether the most important input feature is Gaussian.
  • B. A scatter plot with points colored by target variable that uses t-Distributed Stochastic Neighbor Embedding (t-SNE) to visualize the large number of input variables in an easier-to-read dimension.
  • C. A scatter plot showing the performance of the objective metric over each training iteration.
  • D. A scatter plot showing the correlation between maximum tree depth and the objective metric.

A = wrong; whether an input feature is Gaussian says nothing about which hyperparameter ranges to search. B = wrong; t-SNE visualizes high-dimensional input features in a lower dimension, i.e. it looks at the data, not at the hyperparameters. C = wrong; the objective metric over training iterations shows how a single job converges, not which hyperparameter values produce good AUC.
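D is the useful plot: one point per completed training job, maximum tree depth against the AUC it reached, so the search range can be narrowed to the depths that actually pay off. A minimal sketch, assuming the SageMaker Python SDK, a finished tuning job with the placeholder name "my-tuning-job", and that "max_depth" is one of its tunable hyperparameters:

```python
# A minimal sketch using the SageMaker Python SDK; "my-tuning-job" and the
# "max_depth" column name are assumptions about the tuning job being analyzed.
import matplotlib.pyplot as plt
import sagemaker

# One row per training job: hyperparameter values plus the final objective (AUC).
df = sagemaker.HyperparameterTuningJobAnalytics("my-tuning-job").dataframe()

# If AUC barely changes across the explored depths, the max_depth range can be
# narrowed (or fixed), shortening each nightly tuning run.
plt.scatter(df["max_depth"], df["FinalObjectiveValue"])
plt.xlabel("max_depth")
plt.ylabel("AUC (objective metric)")
plt.show()
```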


A Machine Learning Specialist is creating a new natural language processing application that processes a dataset comprised of 1 million sentences. The aim is to then run Word2Vec to generate embeddings of the sentences and enable different types of predictions.
Here is an example from the dataset:
“The quck BROWN FOX jumps over the lazy dog.”
Which of the following are the operations the Specialist needs to perform to correctly sanitize and prepare the data in a repeatable manner? (Choose three.)

  • A. Perform part-of-speech tagging and keep the action verb and the nouns only.
  • B. Normalize all words by making the sentence lowercase.
  • C. Remove stop words using an English stopword dictionary.
  • D. Correct the typography on “quck” to “quick.”
  • E. One-hot encode all words in the sentence.
  • F. Tokenize the sentence into words.

A = wrong; part-of-speech filtering throws away the surrounding context words Word2Vec needs, and it is not required here. D = wrong; hand-correcting a single typo is a one-off fix, not a repeatable preparation step. E = wrong; Word2Vec learns its own dense word embeddings, so one-hot encoding the vocabulary adds nothing.
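B, C, and F are the repeatable steps. A minimal sketch, assuming NLTK only for its English stop-word list (the regex tokenizer is a simple stand-in):

```python
# A minimal sketch: lowercase, tokenize, and remove English stop words. NLTK is
# assumed only for its stop-word list; the regex tokenizer is a simple stand-in.
import re
import nltk
from nltk.corpus import stopwords

nltk.download("stopwords", quiet=True)
stop_words = set(stopwords.words("english"))

sentence = "The quck BROWN FOX jumps over the lazy dog."

tokens = re.findall(r"[a-z]+", sentence.lower())     # normalize case, then tokenize
tokens = [t for t in tokens if t not in stop_words]  # drop English stop words
print(tokens)  # ['quck', 'brown', 'fox', 'jumps', 'lazy', 'dog']
```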


A company is using Amazon Polly to translate plaintext documents to speech for automated company announcements. However, company acronyms are being mispronounced in the current documents.
How should a Machine Learning Specialist address this issue for future documents?

  • A. Convert current documents to SSML with pronunciation tags.
  • B. Create an appropriate pronunciation lexicon.
  • C. Output speech marks to guide in pronunciation.
  • D. Use Amazon Lex to preprocess the text files for pronunciation.

A and C = wrong; converting to SSML or emitting speech marks fixes the current documents but would have to be repeated for every future document. D = wrong; Amazon Lex builds conversational bots and does nothing for pronunciation. A pronunciation lexicon is uploaded to Polly once and then applied to all future synthesis requests (a sketch follows the documentation link).

https://docs.aws.amazon.com/polly/latest/dg/managing-lexicons.html
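A minimal sketch of option B, assuming boto3 credentials; the lexicon name and the W3C example entry follow the PLS format shown in the linked documentation:

```python
# A minimal sketch: store a pronunciation lexicon once, then reference it in future
# synthesis calls. The lexicon name and the example entry are illustrative.
import boto3

polly = boto3.client("polly")

PLS_LEXICON = """<?xml version="1.0" encoding="UTF-8"?>
<lexicon version="1.0"
      xmlns="http://www.w3.org/2005/01/pronunciation-lexicon"
      alphabet="ipa" xml:lang="en-US">
  <lexeme>
    <grapheme>W3C</grapheme>
    <alias>World Wide Web Consortium</alias>
  </lexeme>
</lexicon>"""

# Uploaded once; applies to any future request that names the lexicon.
polly.put_lexicon(Name="acronyms", Content=PLS_LEXICON)

response = polly.synthesize_speech(
    Text="The W3C announcement goes out today.",
    VoiceId="Joanna",
    OutputFormat="mp3",
    LexiconNames=["acronyms"],
)
```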

