[M] Can we still use F1 for a problem with more than two classes? How?
Yes, the F1 score can be extended to multi-class classification by using one of the following methods:
- Macro F1: Calculate the F1 score for each class independently, then take the average (giving equal weight to each class).
- Micro F1: Aggregate the contributions of all classes and calculate a single F1 score (this approach is often used when you care more about overall performance than per-class performance).
- Weighted F1: Similar to macro F1 but weights each class's F1 score by the number of true instances in that class.
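The three averaging schemes can be sketched in plain Python (the function name and interface are mine; scikit-learn's `f1_score` exposes the same options via its `average` argument):

```python
def f1_scores(y_true, y_pred):
    """Compute macro, micro, and weighted F1 for multi-class labels."""
    classes = sorted(set(y_true) | set(y_pred))
    per_class_f1, support = [], []
    tp_total = fp_total = fn_total = 0
    for c in classes:
        tp = sum(t == c and p == c for t, p in zip(y_true, y_pred))
        fp = sum(t != c and p == c for t, p in zip(y_true, y_pred))
        fn = sum(t == c and p != c for t, p in zip(y_true, y_pred))
        denom = 2 * tp + fp + fn
        per_class_f1.append(2 * tp / denom if denom else 0.0)
        support.append(sum(t == c for t in y_true))
        tp_total, fp_total, fn_total = tp_total + tp, fp_total + fp, fn_total + fn
    macro = sum(per_class_f1) / len(classes)                     # equal weight per class
    micro = 2 * tp_total / (2 * tp_total + fp_total + fn_total)  # pooled counts
    weighted = sum(f * s for f, s in zip(per_class_f1, support)) / sum(support)
    return macro, micro, weighted
```

Note that for single-label multi-class problems, micro F1 reduces to plain accuracy, since every false positive for one class is simultaneously a false negative for another.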
[M] For logistic regression, why is log loss recommended over MSE (mean squared error)?
- Log loss (cross-entropy) is the natural loss function for classification because minimizing it is equivalent to maximum likelihood estimation under a Bernoulli model of the labels. Log loss penalizes confident but incorrect predictions far more severely than MSE.
- Mean squared error is better suited for regression tasks and is not ideal for classification because it treats class labels as continuous values. Combined with a sigmoid output, MSE also yields a non-convex objective whose gradients vanish for confidently wrong predictions, which can lead to suboptimal decision boundaries and poorly calibrated models.
Summary: Log loss directly optimizes for classification by handling probabilities better than MSE, which is not well-suited for discrete outcomes.
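A quick numeric comparison on a single confidently wrong prediction makes the difference in penalty concrete (a minimal sketch; the unbounded penalty is the point):

```python
import math

def log_loss(y, p):
    """Cross-entropy for one binary example with predicted P(y=1) = p."""
    return -(y * math.log(p) + (1 - y) * math.log(1 - p))

def mse(y, p):
    """Squared error for the same example."""
    return (y - p) ** 2

# Confidently wrong: true label 1, predicted probability 0.01.
# Log loss grows without bound as p -> 0, while MSE is capped at 1.
print(log_loss(1, 0.01))  # ~4.61
print(mse(1, 0.01))       # ~0.98
```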
[E] What's the motivation for RNN?
The motivation for Recurrent Neural Networks (RNNs) is to model sequential data or time series data where the order of inputs matters. Unlike traditional neural networks, which treat inputs independently, RNNs have a form of memory by using feedback loops to pass information from one step of the sequence to the next. This allows RNNs to capture dependencies across time or steps, which is essential for tasks like language modeling, speech recognition, and time series prediction.
[E] What's the motivation for LSTM?
The motivation for Long Short-Term Memory (LSTM) networks is to overcome the vanishing gradient problem that affects standard RNNs when learning long-range dependencies. In standard RNNs, gradients can shrink or explode during backpropagation, making it difficult to capture long-term dependencies in sequences. LSTMs address this by using gated mechanisms (input, forget, and output gates) to control the flow of information and allow the network to retain or forget information over long periods, making them more effective for long-term dependencies in tasks like machine translation and time series forecasting.
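To make the gating concrete, here is a single LSTM step in NumPy (the stacked weight layout and gate ordering are assumptions for illustration, not a reference implementation):

```python
import numpy as np

def sigmoid(v):
    return 1.0 / (1.0 + np.exp(-v))

def lstm_step(x, h_prev, c_prev, W, b):
    """One LSTM step. W: (4*H, D+H) stacked gate weights, b: (4*H,).
    Gate order assumed here: input, forget, candidate, output."""
    H = h_prev.shape[0]
    z = W @ np.concatenate([x, h_prev]) + b
    i = sigmoid(z[:H])           # input gate: how much new info to write
    f = sigmoid(z[H:2 * H])      # forget gate: how much of c_prev to keep
    g = np.tanh(z[2 * H:3 * H])  # candidate cell state
    o = sigmoid(z[3 * H:])       # output gate
    c = f * c_prev + i * g       # additive update -> gradients flow across steps
    h = o * np.tanh(c)
    return h, c
```

The additive cell-state update (`c = f * c_prev + i * g`) is what mitigates vanishing gradients: when the forget gate stays near 1, gradients pass through time steps largely unattenuated.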
[M] Why do we need word embeddings?
Word embeddings are needed because they provide a way to represent words as dense vectors in a continuous vector space, capturing their meanings based on their relationships with other words. Traditional representations like one-hot encoding are sparse and don't capture semantic meaning or relationships between words. Word embeddings solve this by:
- Capturing semantic relationships: Words with similar meanings have similar vector representations.
- Reducing dimensionality: Word embeddings compress large vocabulary sizes into low-dimensional vectors while retaining meaningful information.
- Enabling generalization: Embeddings help models generalize better across different words, even those not explicitly seen during training.
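The contrast with one-hot encoding can be seen directly: every pair of distinct one-hot vectors is orthogonal, while dense vectors (made-up values here, purely for illustration) can place related words close together:

```python
import numpy as np

def cosine(u, v):
    """Cosine similarity between two vectors."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# One-hot vectors: all distinct words are orthogonal, so similarity is always 0.
cat_oh, dog_oh = np.eye(5)[0], np.eye(5)[1]

# Hypothetical dense embeddings (values invented for illustration):
cat = np.array([0.8, 0.1, 0.3])
dog = np.array([0.7, 0.2, 0.4])
car = np.array([-0.5, 0.9, -0.2])

print(cosine(cat_oh, dog_oh))  # 0.0 -- one-hot carries no similarity signal
print(cosine(cat, dog))        # high -- related words end up close
print(cosine(cat, car))        # low  -- unrelated words end up far apart
```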
[M] What's the difference between count-based and prediction-based word embeddings?
- Count-based embeddings (e.g., TF-IDF, Latent Semantic Analysis): These embeddings are derived from word co-occurrence statistics. They analyze how frequently words appear in a given context and construct embeddings from these counts. These methods are simple but may not capture complex semantic relationships as effectively.
- Prediction-based embeddings (e.g., word2vec): These embeddings are learned by predicting the context of a word given its neighbors (or vice versa). Prediction-based models capture more nuanced relationships because they are trained to optimize for context prediction rather than simple co-occurrence counts. (GloVe sits between the two camps: it uses a prediction-style regression objective but is trained directly on global co-occurrence counts.)
Summary: Count-based methods focus on word frequency statistics, while prediction-based methods focus on optimizing context-based predictions.
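The count-based starting point is literally just tallying co-occurrences; a minimal sketch (function name and window size are illustrative):

```python
from collections import Counter

def cooccurrence_counts(sentences, window=1):
    """Count how often each word pair co-occurs within a +/- `window`
    context -- the raw statistics behind LSA/GloVe-style embeddings."""
    counts = Counter()
    for sent in sentences:
        for i, w in enumerate(sent):
            for j in range(max(0, i - window), min(len(sent), i + window + 1)):
                if i != j:
                    counts[(w, sent[j])] += 1
    return counts

corpus = [["the", "cat", "sat"], ["the", "dog", "sat"]]
counts = cooccurrence_counts(corpus)
print(counts[("the", "cat")])  # 1 -- "cat" appears once next to "the"
```

Count-based methods then build vectors from this matrix (e.g., via weighting and dimensionality reduction), whereas prediction-based methods never materialize it and instead learn vectors by gradient descent on a context-prediction objective.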
Language Model Choice for Small Dataset
[E] Your client wants you to train a language model on their dataset but their dataset is very small with only about 10,000 tokens. Would you use an n-gram or a neural language model?
Given that the dataset is very small (only 10,000 tokens), an n-gram model would be a more suitable choice. Neural language models, such as recurrent neural networks (RNNs) or transformers, typically require large amounts of data to perform well because they have a large number of parameters to train. In contrast, an n-gram model, which is based on counting sequences of words, is more effective on smaller datasets since it doesn't require as much data to estimate probabilities of word sequences. However, an n-gram model might still face sparsity issues, and techniques like smoothing could help alleviate this.
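An n-gram model is essentially just counting, which is why it needs so little data; a toy maximum-likelihood bigram model (names are mine):

```python
from collections import Counter

def train_bigram_lm(tokens):
    """Maximum-likelihood bigram model: P(w2 | w1) = count(w1, w2) / count(w1)."""
    unigrams = Counter(tokens)
    bigrams = Counter(zip(tokens, tokens[1:]))
    def prob(w1, w2):
        return bigrams[(w1, w2)] / unigrams[w1] if unigrams[w1] else 0.0
    return prob

tokens = "the cat sat on the mat the cat ran".split()
prob = train_bigram_lm(tokens)
print(prob("the", "cat"))  # 2/3 -- "the" occurs 3 times, followed by "cat" twice
```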
Context Length in N-gram Models
[E] For n-gram language models, does increasing the context length (n) improve the model's performance? Why or why not?
Increasing the context length n in n-gram models can improve performance because it allows the model to consider a longer history of preceding words, which generally provides more context for predicting the next word. However, this improvement comes at the cost of:
- Data sparsity: As n increases, the number of possible n-grams grows exponentially, and with a limited dataset, many n-grams may not appear frequently or at all. This leads to data sparsity and poor probability estimates.
- Overfitting: The model may overfit the training data by relying too heavily on specific n-grams that appear in the dataset but do not generalize well to unseen data.
To balance performance and sparsity, smoothing techniques (like Kneser-Ney or Laplace smoothing) are often used.
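Laplace (add-one, or more generally add-alpha) smoothing, for instance, adds a pseudo-count to every n-gram so unseen sequences keep a small nonzero probability (a sketch with hypothetical counts):

```python
def laplace_bigram_prob(bigram_count, w1_count, vocab_size, alpha=1.0):
    """Add-alpha smoothed bigram probability (Laplace when alpha=1):
    P(w2 | w1) = (count(w1, w2) + alpha) / (count(w1) + alpha * |V|).
    Unseen bigrams get a small nonzero probability instead of zero."""
    return (bigram_count + alpha) / (w1_count + alpha * vocab_size)

# An unseen bigram in a 10,000-word vocabulary where w1 occurred 3 times:
print(laplace_bigram_prob(0, 3, 10_000))  # small but nonzero
```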
Softmax in Word-level Language Models
[M] What problems might we encounter when using softmax as the last layer for word-level language models? How do we fix it?
Problems with softmax in word-level language models:
- Computational cost: For large vocabularies, computing the softmax over all possible words becomes computationally expensive because softmax normalizes over the entire vocabulary.
- Memory inefficiency: Storing the weights for each output word can require a large amount of memory, especially when dealing with vocabularies in the tens or hundreds of thousands of words.
Fixes:
- Hierarchical softmax: This reduces the computational complexity by using a tree-based structure where the softmax is computed over a hierarchy of words instead of the entire vocabulary.
- Negative sampling: Used in models like word2vec, this method approximates the softmax by sampling a few negative classes, reducing the number of computations.
- Sampled softmax: Similar to negative sampling, it approximates the softmax by sampling from a subset of words rather than considering the entire vocabulary.
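A rough illustration of why sampling helps: the exact loss normalizes over the full vocabulary, while the sampled version only touches the target plus a small set of negatives (uniform sampling here for simplicity; production implementations correct for the proposal distribution):

```python
import numpy as np

rng = np.random.default_rng(0)

def full_softmax_loss(logits, target):
    """Exact cross-entropy: normalizes over the entire vocabulary."""
    logits = logits - logits.max()  # numerical stability
    return -(logits[target] - np.log(np.exp(logits).sum()))

def sampled_softmax_loss(logits, target, num_samples=100):
    """Approximation: normalize over the target plus a random subset of
    negative classes instead of all |V| words."""
    negatives = rng.choice(len(logits), size=num_samples, replace=False)
    idx = np.concatenate(([target], negatives[negatives != target]))
    sub = logits[idx] - logits[idx].max()
    return -(sub[0] - np.log(np.exp(sub).sum()))
```

The sampled loss costs O(num_samples) per example instead of O(|V|), which is the entire point when the vocabulary runs to hundreds of thousands of words.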
BLEU Score for Machine Translation
[M] BLEU is a popular metric for machine translation. What are the pros and cons of BLEU?
Pros:
- Simple and efficient: BLEU is easy to calculate and widely used, providing a quick way to evaluate the quality of machine-translated text.
- N-gram precision: It considers n-gram precision, rewarding translations that match the reference text at the word or phrase level.
- Standard benchmark: BLEU is a standard benchmark for comparing machine translation systems, enabling consistent evaluation across different models.
Cons:
- Ignores meaning: BLEU focuses purely on surface-level word matches and n-gram precision, which may not reflect whether the translation captures the overall meaning or intent of the original sentence.
- Sensitive to exact matches: BLEU penalizes translations that use synonyms or paraphrases that convey the same meaning but don't match the reference exactly.
- Shortcomings in fluency: It doesn't measure fluency or coherence across sentences and fails to capture grammatical correctness.
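The exact-match sensitivity is easy to see from BLEU's core ingredient, clipped (modified) n-gram precision (a toy sketch, not a full BLEU implementation: no brevity penalty or geometric mean over n):

```python
from collections import Counter

def modified_ngram_precision(candidate, reference, n=1):
    """Clipped n-gram precision, the core of BLEU: each candidate n-gram
    counts at most as often as it appears in the reference."""
    cand = Counter(zip(*[candidate[i:] for i in range(n)]))
    ref = Counter(zip(*[reference[i:] for i in range(n)]))
    clipped = sum(min(c, ref[g]) for g, c in cand.items())
    return clipped / max(sum(cand.values()), 1)

ref = "the cat is on the mat".split()
# A paraphrase that preserves meaning is still penalized for non-matching words:
hyp = "the cat is on the rug".split()
print(modified_ngram_precision(hyp, ref))  # 5/6 despite near-identical meaning
```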