Duration: 15-20 minutes
| Technique | Description |
|---|---|
| A. Transfer Learning | 1. Uses task-specific knowledge to predict new, unseen categories |
| B. Active Learning | 2. Selects the most informative unlabeled samples for annotation |
| C. Zero-Shot Learning | 3. Integrates prior biophysical or ecological models |
| D. Process-Aware Learning | 4. Uses pretrained model weights as initialization |
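For facilitators who want a concrete illustration of zero-shot learning from this table, here is a minimal Python sketch of attribute-based zero-shot classification (one common strategy, chosen here as an illustrative assumption): a class the model never saw in training can still be predicted by matching a sample's predicted attributes against each class's attribute signature. The class names and attribute vectors are invented for the example.

```python
def zero_shot_predict(sample_attrs, class_attrs):
    """Score each (possibly unseen) class by how well the sample's
    predicted attribute vector matches the class's attribute signature,
    and return the best-matching class name."""
    def score(a, b):
        # dot-product similarity between attribute vectors
        return sum(x * y for x, y in zip(a, b))
    return max(class_attrs, key=lambda c: score(sample_attrs, class_attrs[c]))

# Hypothetical attribute signatures; 'maize' could be a class with no
# training images at all, yet it is still predictable via its attributes.
signatures = {"maize": [1, 0, 1], "rice": [0, 1, 1]}
print(zero_shot_predict([0.9, 0.1, 0.8], signatures))  # -> maize
```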
| Technique | Description |
|---|---|
| A. Weakly Supervised Learning | 1. Combines a small amount of labeled data with a large amount of unlabeled data |
| B. Semi-Supervised Learning | 2. Uses coarse, noisy, or incomplete labels to train models |
| C. Self-Supervised Learning | 3. Learns representations using pseudo-labels or pretext tasks |
| D. Few-Shot Learning | 4. Learns to generalize from very few examples per class |
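To make the semi-supervised entry in this table concrete, here is a minimal Python sketch of pseudo-labelling with a nearest-centroid classifier: a small labeled set defines class centroids, and unlabeled points close enough to a centroid receive that class as a pseudo-label. The distance threshold as a confidence filter is an illustrative assumption; real pipelines typically use model confidence scores.

```python
def centroid(points):
    """Mean vector of a list of equal-length feature vectors."""
    n = len(points)
    return [sum(p[d] for p in points) / n for d in range(len(points[0]))]

def pseudo_label(labeled, unlabeled, max_dist):
    """Assign each unlabeled point the class of its nearest centroid,
    but only when it lies within max_dist (a crude confidence filter);
    points beyond the threshold stay unlabeled (None)."""
    by_class = {}
    for x, y in labeled:
        by_class.setdefault(y, []).append(x)
    cents = {y: centroid(pts) for y, pts in by_class.items()}
    out = []
    for x in unlabeled:
        dists = {y: sum((a - b) ** 2 for a, b in zip(x, c)) ** 0.5
                 for y, c in cents.items()}
        best = min(dists, key=dists.get)
        out.append(best if dists[best] <= max_dist else None)
    return out

labeled = [([0, 0], "A"), ([1, 0], "A"), ([5, 5], "B"), ([6, 5], "B")]
print(pseudo_label(labeled, [[0.6, 0.1], [5.4, 5.1], [3, 3]], 1.0))
# -> ['A', 'B', None]  (the ambiguous mid-point is left unlabeled)
```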
| Technique | Example use case |
|---|---|
| A. Active Learning | 1. A system flags uncertain areas in an image for expert labeling |
| B. Ensemble Learning | 2. Multiple models trained on different seasons are combined |
| C. Process-Aware Learning | 3. A model learns using crop growth simulations alongside satellite data |
| D. Zero-Shot Learning | 4. A model identifies a new crop class it wasn’t trained on |
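The active-learning use case above ("a system flags uncertain areas for expert labeling") can be sketched in a few lines of Python via uncertainty sampling: rank unlabeled samples by the entropy of their predicted class probabilities and send the top-k to an annotator. The probabilities below are invented for the example.

```python
import math

def select_for_labeling(probs, k):
    """Rank unlabeled samples by prediction entropy (uncertainty
    sampling) and return the indices of the k most uncertain ones."""
    def entropy(p):
        return -sum(pi * math.log(pi) for pi in p if pi > 0)
    ranked = sorted(range(len(probs)),
                    key=lambda i: entropy(probs[i]), reverse=True)
    return ranked[:k]

probs = [
    [0.98, 0.01, 0.01],  # confident prediction
    [0.34, 0.33, 0.33],  # near-uniform: most uncertain, gets flagged
    [0.70, 0.20, 0.10],
]
print(select_for_labeling(probs, 1))  # -> [1]
```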
| Technique | Solves which challenge? |
|---|---|
| A. Few-Shot Learning | 1. Learning with limited samples for rare crop types |
| B. Transfer Learning | 2. Adapting models trained in one region to another |
| C. Active Learning | 3. Reducing annotation costs by prioritizing sample selection |
| D. Ensemble Learning | 4. Improving model robustness and generalization |
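Ensemble learning's robustness claim in this table can be illustrated with the simplest combination rule, majority voting over per-model predictions (the crop names are invented; in the earlier use-case table the models might be trained on different seasons):

```python
from collections import Counter

def ensemble_vote(predictions):
    """Combine class predictions from several models by majority vote.
    `predictions` is a list of per-model prediction lists, one entry
    per sample; the result has one voted class per sample."""
    return [Counter(col).most_common(1)[0][0] for col in zip(*predictions)]

model_a = ["maize", "rice", "wheat"]
model_b = ["maize", "wheat", "wheat"]
model_c = ["rice", "rice", "wheat"]
print(ensemble_vote([model_a, model_b, model_c]))
# -> ['maize', 'rice', 'wheat']
```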
| Concept | Description |
|---|---|
| A. Spatial k-Fold CV | 1. Ensures validation data are geographically independent from training |
| B. Random Sampling | 2. Often results in spatial data leakage |
| C. Cross-Validation Fold | 3. A subset used for model evaluation |
| D. Model Generalisation | 4. Ability to apply a model to new, unseen regions |
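The spatial k-fold idea above can be demonstrated with a short, dependency-free Python sketch: folds are built from whole regions, so no region contributes samples to both the training and validation sides of a split, which is exactly the leakage that plain random sampling permits. The region labels are invented for the example.

```python
def spatial_kfold(samples, regions, k):
    """Split samples into k folds by region so that no region appears
    in both training and validation (avoids spatial data leakage).
    Returns a list of (train, validation) pairs."""
    unique = sorted(set(regions))
    folds = []
    for i in range(k):
        held_out = set(unique[i::k])  # regions reserved for this fold
        val = [s for s, r in zip(samples, regions) if r in held_out]
        train = [s for s, r in zip(samples, regions) if r not in held_out]
        folds.append((train, val))
    return folds

# Six samples from three regions; each fold holds out one whole region.
folds = spatial_kfold(list(range(6)), ["N", "N", "S", "S", "E", "E"], 3)
print(folds[0])  # -> ([0, 1, 2, 3], [4, 5])  region 'E' fully held out
```

A plain random split over the same six samples would typically mix each region across both sides, inflating the validation score relative to true out-of-region performance.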
Duration: 10-15 minutes