I've been testing SAM2's performance on production VFX plates with heavy motion blur—specifically, fast-moving vehicles and action sequences shot with wide-open shutters (180-degree or higher). The results reveal interesting edge cases that matter for production deployment.
Motion blur is everywhere in live-action VFX. When a subject moves quickly relative to the camera, pixels smear across multiple positions in a single frame. This makes segmentation challenging: the model needs to understand that the blurred pixels belong to the moving subject, not the background.
SAM2 handles moderate motion blur well (up to roughly a 90-degree shutter equivalent), but I noticed temporal consistency degrading on plates with extreme blur. The findings below quantify this.
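To make the shutter-angle numbers concrete: a rotary shutter's exposure time is the open fraction of the frame interval, and for a subject moving at constant screen-space speed the streak length scales with that same fraction. A back-of-envelope sketch (the function names and the constant-speed assumption are mine):

```python
def exposure_time(shutter_angle_deg: float, fps: float) -> float:
    """Exposure time in seconds for a rotary shutter.

    A 180-degree shutter at 24 fps exposes each frame for half the
    frame interval: (180 / 360) / 24 ~= 20.8 ms.
    """
    return (shutter_angle_deg / 360.0) / fps


def blur_length_px(speed_px_per_frame: float, shutter_angle_deg: float) -> float:
    """Approximate streak length in pixels for a subject moving at a
    constant screen-space speed: the smear covers the fraction of the
    frame interval during which the shutter is open."""
    return speed_px_per_frame * (shutter_angle_deg / 360.0)
```

For a subject covering 40 px per frame, this gives a 10 px streak at 90 degrees but a 30 px streak at 270 degrees, which is why the extreme-shutter plates are so much harder to segment.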
I tested on three sequences:
For each test, I recorded:
Finding 1: SAM2's temporal consistency drops significantly above 180-degree shutter angles. On the 270-degree martial arts sequence, mean IoU between adjacent-frame masks dropped from 0.92 (at a 90-degree equivalent) to 0.74 (at 270 degrees).
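For anyone wanting to reproduce this metric on their own plates, here is a minimal sketch of how adjacent-frame IoU can be computed (the helper names are mine):

```python
import numpy as np


def mask_iou(a: np.ndarray, b: np.ndarray) -> float:
    """IoU between two boolean masks of the same shape."""
    a, b = a.astype(bool), b.astype(bool)
    union = np.logical_or(a, b).sum()
    if union == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return float(np.logical_and(a, b).sum() / union)


def temporal_consistency(masks: list[np.ndarray]) -> float:
    """Mean IoU between each pair of adjacent frame masks."""
    pairs = zip(masks, masks[1:])
    return float(np.mean([mask_iou(a, b) for a, b in pairs]))
```

One caveat: for a fast-moving subject, adjacent-frame IoU penalizes genuine motion as well as mask instability. Comparing the same action at different shutter angles, as in these tests, controls for that, since the subject motion is identical across variants.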
Finding 2: The model tends to exclude motion blur trails from the mask. This is actually desirable in many compositing scenarios (you can reintroduce blur post-segmentation with a MotionBlur node), but artists need to be aware of this behavior.
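As a rough illustration of reintroducing blur on a hard matte: a production comp would drive a vector-based MotionBlur node from motion vectors, but the idea can be sketched as smearing the matte with a line kernel along the motion direction (here assumed horizontal; the function is a simplified stand-in, not the Nuke node's algorithm):

```python
import numpy as np


def reblur_matte(matte: np.ndarray, length_px: int) -> np.ndarray:
    """Smear a hard matte along an assumed-horizontal motion direction
    with a uniform line kernel -- a crude stand-in for a compositing
    MotionBlur node. `matte` is a float array in [0, 1]."""
    kernel = np.ones(length_px) / length_px
    # Convolve each row; mode="same" keeps the original width.
    return np.apply_along_axis(
        lambda row: np.convolve(row, kernel, mode="same"), 1, matte)
```

The smeared matte's partial coverage values approximate the transparency the blur trail would have had, which is usually what you want back under the comp.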
Finding 3: Synthetic motion blur (CG-rendered) performs better than optical motion blur (from camera shutter). This suggests the training data may include more synthetic examples.
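For a shutter-matched synthetic-vs-optical comparison like this, one way to generate the synthetic side is to average sub-frame renders over the shutter-open interval, the way a renderer accumulates temporal samples. A minimal sketch, assuming a `render(t)` callback that produces a float image at time `t` seconds (all names are mine):

```python
import numpy as np


def synth_motion_blur(render, frame: int, fps: float,
                      shutter_angle_deg: float, samples: int = 8) -> np.ndarray:
    """Approximate shutter blur by averaging sub-frame renders taken
    across the shutter-open interval of the given frame."""
    t0 = frame / fps
    open_time = (shutter_angle_deg / 360.0) / fps  # shutter-open duration
    times = t0 + np.linspace(0.0, open_time, samples)
    return np.mean([render(t) for t in times], axis=0)
```

Rendering the same subject motion at 90 and 270 degrees this way gives blur-matched test pairs without reshooting, which is how a synthetic-vs-optical gap can be isolated from other plate differences.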
I'm testing two approaches to improve results on heavily blurred footage:
I'm planning to:
Overall, SAM2 is still incredibly useful for roto work, but artists should be aware that extreme motion blur remains an edge case. For those sequences, expect to add more interactive prompts and plan for manual cleanup.