Perception / Computer Vision Engineer
Build the perception stack — detection, tracking, depth estimation — that lets autonomous systems understand what they see.
Perception is the number-one bottleneck for autonomy. Modern CV/ML expertise applied to noisy, real-world drone imagery commands top-of-market pay at defense primes and startups alike.
CV & deep learning foundations
Classical CV + modern deep learning. You need both — neural networks don’t free you from knowing what a Jacobian is.
Most CV research, and much of production CV, runs on PyTorch.
Geometry still matters — SLAM, VIO, SfM all come from this book.
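A concrete taste of why the geometry (and the Jacobian) still matters: the derivative of a pinhole projection with respect to a 3D point is the building block of the optimizers inside SfM, VIO, and bundle adjustment. A minimal numpy sketch, with made-up intrinsics and a made-up test point:

```python
import numpy as np

def project(K, X):
    """Project camera-frame point X = [x, y, z] to pixel coordinates."""
    x, y, z = X
    u = K[0, 0] * x / z + K[0, 2]
    v = K[1, 1] * y / z + K[1, 2]
    return np.array([u, v])

def projection_jacobian(K, X):
    """2x3 Jacobian d(u,v)/d(x,y,z) of the pinhole projection."""
    x, y, z = X
    fx, fy = K[0, 0], K[1, 1]
    return np.array([
        [fx / z, 0.0,    -fx * x / z**2],
        [0.0,    fy / z, -fy * y / z**2],
    ])

# Illustrative intrinsics and point (not from any real calibration).
K = np.array([[500.0, 0.0, 320.0],
              [0.0, 500.0, 240.0],
              [0.0,   0.0,   1.0]])
X = np.array([0.2, -0.1, 4.0])

# Sanity-check the analytic Jacobian against finite differences.
J = projection_jacobian(K, X)
eps = 1e-6
J_num = np.stack([(project(K, X + eps * e) - project(K, X)) / eps
                  for e in np.eye(3)], axis=1)
assert np.allclose(J, J_num, atol=1e-3)
```

This is exactly the kind of derivation a neural network will not do for you: the optimizer in a SLAM backend needs that 2x3 matrix in closed form, thousands of times per frame.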
Detection, tracking, 3D perception
Moving from image-level ML to full 3D world understanding.
The workhorse perception task on every UAV.
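Every detector's output pipeline ends in the same step: non-maximum suppression collapses overlapping candidate boxes into one detection per object. A self-contained numpy sketch of the greedy algorithm (libraries like torchvision ship an optimized version; this is only for intuition):

```python
import numpy as np

def nms(boxes, scores, iou_thresh=0.5):
    """Greedy non-maximum suppression over [x1, y1, x2, y2] boxes.
    Returns indices of kept boxes, highest score first."""
    order = np.argsort(scores)[::-1]
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(int(i))
        if order.size == 1:
            break
        rest = order[1:]
        # Intersection of the top box with every remaining box.
        x1 = np.maximum(boxes[i, 0], boxes[rest, 0])
        y1 = np.maximum(boxes[i, 1], boxes[rest, 1])
        x2 = np.minimum(boxes[i, 2], boxes[rest, 2])
        y2 = np.minimum(boxes[i, 3], boxes[rest, 3])
        inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
        area_i = (boxes[i, 2] - boxes[i, 0]) * (boxes[i, 3] - boxes[i, 1])
        area_r = (boxes[rest, 2] - boxes[rest, 0]) * (boxes[rest, 3] - boxes[rest, 1])
        iou = inter / (area_i + area_r - inter)
        # Drop everything that overlaps the kept box too much.
        order = rest[iou < iou_thresh]
    return keep

boxes = np.array([[0, 0, 10, 10], [1, 1, 11, 11], [20, 20, 30, 30]], float)
scores = np.array([0.9, 0.8, 0.7])
print(nms(boxes, scores))  # → [0, 2]: the two overlapping boxes collapse to one
```

The same IoU computation doubles as the association cost in simple detection-based trackers, which is why it is worth knowing cold.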
GPS-denied navigation starts here.
Neural scene reconstruction is starting to replace classical SfM/MVS for offline mapping, synthetic-data generation, and post-mission analysis.
Defense ISR is moving from task-specific CNNs to fine-tuned foundation models. Few-shot adaptation on small operator-labeled datasets is the new perception workflow.
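The simplest form of that workflow is a linear probe: freeze the foundation model's image encoder and fit only a small classifier head on the handful of operator-labeled examples. A numpy sketch under stated assumptions — the "frozen embeddings" below are synthetic stand-ins for what an encoder such as CLIP or DINOv2 would produce:

```python
import numpy as np

rng = np.random.default_rng(0)
dim, shots = 64, 8  # embedding size, labeled examples per class

# Synthetic stand-in for frozen encoder outputs: two classes whose
# embeddings cluster around different means.
mu_pos, mu_neg = rng.normal(size=dim), rng.normal(size=dim)
X = np.vstack([mu_pos + 0.3 * rng.normal(size=(shots, dim)),
               mu_neg + 0.3 * rng.normal(size=(shots, dim))])
y = np.array([1] * shots + [0] * shots)

# Logistic-regression head trained by plain gradient descent;
# the encoder's weights are never touched.
w, b = np.zeros(dim), 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
    w -= 0.5 * (X.T @ (p - y)) / len(y)
    b -= 0.5 * np.mean(p - y)

acc = np.mean((1.0 / (1.0 + np.exp(-(X @ w + b))) > 0.5) == y)
```

With good embeddings, sixteen labels are often enough, which is the whole appeal for operators who cannot ship data off-platform for a full retraining run.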
Perception + sensor fusion patterns that port straight to aerial.
Every perception node in a flight stack speaks ROS image msgs.
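Under the hood a sensor_msgs/Image is just metadata plus a row-major byte buffer, which is all cv_bridge converts for the simple encodings. A sketch of that round trip with no ROS installed — the dict is a stand-in for the real message class, with field names taken from the message spec, and it assumes no per-row padding (step = width * 3):

```python
import numpy as np

def encode_rgb8(frame):
    """numpy (H, W, 3) uint8 -> dict mimicking sensor_msgs/Image fields."""
    h, w, _ = frame.shape
    return {"height": h, "width": w, "encoding": "rgb8",
            "step": w * 3, "data": frame.tobytes()}

def decode_rgb8(msg):
    """dict mimicking sensor_msgs/Image -> numpy (H, W, 3) uint8."""
    assert msg["encoding"] == "rgb8"
    flat = np.frombuffer(msg["data"], dtype=np.uint8)
    return flat.reshape(msg["height"], msg["width"], 3)

frame = np.arange(2 * 4 * 3, dtype=np.uint8).reshape(2, 4, 3)
msg = encode_rgb8(frame)
assert np.array_equal(decode_rgb8(msg), frame)  # lossless round trip
```

Knowing the wire layout matters when you profile a perception node: a needless copy of a 4K frame per callback is a very visible line in the latency budget.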
Deployment on embedded
A model that only runs on a DGX is useless on a drone. This stage is about making it fit + fly.
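Much of "making it fit" is this one transform: affine int8 quantization stores a float32 weight tensor in a quarter of the memory at a small, bounded error. Toolchains like TensorRT automate it end to end; the numpy sketch below shows only the core math on a synthetic weight matrix:

```python
import numpy as np

def quantize_int8(w):
    """Symmetric per-tensor int8 quantization: w ≈ scale * q."""
    scale = np.abs(w).max() / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

rng = np.random.default_rng(0)
w = rng.normal(scale=0.05, size=(256, 256)).astype(np.float32)

q, scale = quantize_int8(w)
w_hat = q.astype(np.float32) * scale  # dequantized reconstruction

memory_ratio = q.nbytes / w.nbytes  # 0.25: one byte per weight instead of four
max_err = np.abs(w - w_hat).max()   # rounding error is bounded by scale / 2
assert max_err <= scale / 2 + 1e-6
```

Per-channel scales, activation calibration, and quantization-aware training refine this, but the memory and bandwidth win all comes from that 4x shrink — which is often the difference between real-time and not on a Jetson-class board.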
Vendor cert that hiring managers recognize.
NVIDIA DLI course — hands-on Jetson deployment.