2 Commits

Ronghang Hu
c61e2475e6 switch to a new implementation of the class SAM2VideoPredictor for per-object inference in sam2/sam2_video_predictor.py
In this PR, we switch to a new implementation of the class `SAM2VideoPredictor` in sam2/sam2_video_predictor.py, which allows for independent per-object inference.

Specifically, the new `SAM2VideoPredictor`:
* it handles inference for each object separately, as if a separate session were opened for each object
* it relaxes the assumption on prompting
  * previously, if a frame received clicks for only a subset of objects, the remaining (non-prompted) objects were assumed to be absent in that frame
  * now, if a frame receives clicks for only a subset of objects, no assumption is made about the remaining (non-prompted) objects
* it allows adding new objects after tracking starts
* (The previous implementation is backed up to `SAM2VideoPredictor` in sam2/sam2_video_predictor_legacy.py)
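
The per-object behavior described above can be illustrated with a minimal pure-Python sketch. Note that `PerObjectPredictor`, `add_points`, and `propagate` are hypothetical names used only to model the design; they are not the actual `SAM2VideoPredictor` API.

```python
# Conceptual sketch of independent per-object inference sessions.
# All names here are illustrative, not the real sam2 API.

class PerObjectPredictor:
    """Tracks each object in its own independent session."""

    def __init__(self):
        self.sessions = {}  # obj_id -> per-object prompt state

    def add_points(self, frame_idx, obj_id, points):
        # Prompting one object touches only that object's session.
        # Other (non-prompted) objects are left untouched, rather than
        # being assumed absent on this frame. New objects may be added
        # at any frame, even after tracking has started.
        session = self.sessions.setdefault(obj_id, {"prompts": {}})
        session["prompts"].setdefault(frame_idx, []).extend(points)

    def propagate(self, frame_idx):
        # Each object is handled independently; here we simply report
        # which objects have received any prompt up to this frame.
        return {
            obj_id: any(f <= frame_idx for f in s["prompts"])
            for obj_id, s in self.sessions.items()
        }

predictor = PerObjectPredictor()
predictor.add_points(frame_idx=0, obj_id=1, points=[(10, 20)])
# A second object is introduced after tracking object 1 has begun:
predictor.add_points(frame_idx=5, obj_id=2, points=[(30, 40)])
print(predictor.propagate(frame_idx=5))  # {1: True, 2: True}
```

The point of the sketch is the isolation: state for each `obj_id` lives in its own session dict, so prompting or adding one object never modifies another object's state.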

Also, fix a small typo `APP_URL` => `API_URL` in the doc.

Test plan: tested with the predictor notebook `notebooks/video_predictor_example.ipynb` and the VOS script `tools/vos_inference.py`. Also tested with the demo.
2024-12-11 07:13:06 +00:00
Haitham Khedr
aa9b8722d0 SAM2.1
SAM2.1 checkpoints + training code + Demo
2024-09-29 05:49:56 +00:00