5 Commits

Roman Rädle
d2f83d0d54 [sam2][demo][1/x] Remove adding object constraint after propagation
Summary:

PR #486 changed propagation to track each object independently. This makes it possible to add objects after the initial propagation.

The frontend had a constraint that prevented users from adding new objects after the initial "tracking" started. Now that the model tracks objects independently, we can remove this constraint from the UI.

Test Plan:

`cd demo/frontend`

`yarn lint`
```
(base) ➜ demo/frontend $ yarn lint
yarn run v1.18.0
$ eslint . --ext ts,tsx --report-unused-disable-directives --max-warnings 0
  Done in 16.98s.
```
2024-12-24 16:33:40 -08:00
Ronghang Hu
393ae336a7
SAM 2 Update 12/11/2024 -- full model compilation for a major VOS speedup and a new SAM2VideoPredictor to better handle multi-object tracking (#486)
This PR provides new features and updates for SAM 2:

- We now support `torch.compile` of the entire SAM 2 model on videos, which can be turned on by setting `vos_optimized=True` in `build_sam2_video_predictor` (it uses the new `SAM2VideoPredictorVOS` predictor class in `sam2/sam2_video_predictor.py`); a short usage sketch follows this list.
  * Compared to the previous setting (which only compiles the image encoder backbone), the new full model compilation gives a major speedup in inference FPS.
  * In the VOS prediction script `tools/vos_inference.py`, you can enable this option via the `--use_vos_optimized_video_predictor` flag.
  * Note that turning on this flag might introduce a small variance in the predictions due to numerical differences caused by `torch.compile` of the full model.
  * **PyTorch 2.5.1 is the minimum version for full support of this feature**. (Earlier PyTorch versions might run into compilation errors in some cases.) Therefore, we have updated the minimum PyTorch version to 2.5.1 accordingly in the installation scripts.
- We also update the implementation of the `SAM2VideoPredictor` class for SAM 2 video prediction in `sam2/sam2_video_predictor.py`, allowing for independent per-object inference. Specifically, in the new `SAM2VideoPredictor`:
  * Now **we handle the inference of each object independently** (as if we are opening a separate session for each object) while sharing their backbone features.
  * This change relaxes the prompting assumption for multi-object tracking. Previously (due to the batching behavior in inference), if a video frame received clicks for only a subset of objects, the remaining (non-prompted) objects were assumed to be non-existent in that frame (i.e., in such frames, the user was telling SAM 2 that those objects don't appear). Now, if a frame receives clicks for only a subset of objects, we make no assumption about the remaining (non-prompted) objects (each object is handled independently and is unaffected by how other objects are prompted). As a result, **we now allow adding new objects after tracking starts**, which was previously a usage restriction; a sketch of this workflow follows this list.
  * We believe the new version is a more natural inference behavior and have therefore made it the default. The previous implementation of `SAM2VideoPredictor` is backed up in `sam2/sam2_video_predictor_legacy.py`. All VOS inference results using `tools/vos_inference.py` should remain the same after this change to the `SAM2VideoPredictor` class.
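
For the compilation feature, a minimal usage sketch (the config and checkpoint paths are illustrative; substitute your local ones):

```
from sam2.build_sam import build_sam2_video_predictor

# Build the video predictor with full-model torch.compile enabled.
# vos_optimized=True selects the new SAM2VideoPredictorVOS class;
# it requires PyTorch >= 2.5.1. Paths below are examples only.
predictor = build_sam2_video_predictor(
    "configs/sam2.1/sam2.1_hiera_l.yaml",
    "./checkpoints/sam2.1_hiera_large.pt",
    device="cuda",
    vos_optimized=True,
)
```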
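
And a sketch of the relaxed multi-object prompting, reusing the `predictor` built above; `init_state`, `add_new_points_or_box`, and `propagate_in_video` are the public `SAM2VideoPredictor` methods, while the video path, frame indices, object ids, and click coordinates are made up for illustration:

```
import numpy as np
import torch

with torch.inference_mode():
    state = predictor.init_state(video_path="./videos/example")  # JPEG frame folder

    # Prompt only object 1 on frame 0; no assumption is made about
    # any other (non-prompted) objects in this frame.
    predictor.add_new_points_or_box(
        inference_state=state,
        frame_idx=0,
        obj_id=1,
        points=np.array([[210, 350]], dtype=np.float32),
        labels=np.array([1], dtype=np.int32),  # 1 = positive click
    )
    for frame_idx, obj_ids, masks in predictor.propagate_in_video(state):
        pass  # consume per-frame masks for object 1

    # Previously disallowed: add a second object *after* tracking started.
    predictor.add_new_points_or_box(
        inference_state=state,
        frame_idx=30,
        obj_id=2,
        points=np.array([[120, 80]], dtype=np.float32),
        labels=np.array([1], dtype=np.int32),
    )
    for frame_idx, obj_ids, masks in predictor.propagate_in_video(state):
        pass  # masks now cover both objects, tracked independently
```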
2024-12-11 15:00:55 -08:00
Roman Rädle
ff9704fc0e [sam2][demo][1/x] Fix file upload
Summary:

The Strawberry GraphQL library recently disabled multipart requests by default. This resulted in a video upload request returning "Unsupported content type" instead of uploading the video, processing it, and returning the video path.

This issue was raised in #361. A forward fix is to add `multipart_uploads_enabled=True` to the endpoint view.
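
For context, a minimal sketch of the fix, assuming a Flask backend wired up with Strawberry's `GraphQLView` (the schema, field names, and route here are illustrative, shaped to match the cURL request below):

```
from flask import Flask
import strawberry
from strawberry.file_uploads import Upload
from strawberry.flask.views import GraphQLView


@strawberry.type
class Video:
    path: str


@strawberry.type
class Query:
    @strawberry.field
    def ping(self) -> str:
        return "pong"


@strawberry.type
class Mutation:
    @strawberry.mutation
    def upload_video(self, file: Upload) -> Video:
        # Placeholder: persist the upload and return its stored path.
        return Video(path="uploads/example.mp4")


schema = strawberry.Schema(query=Query, mutation=Mutation)

app = Flask(__name__)
app.add_url_rule(
    "/graphql",
    view_func=GraphQLView.as_view(
        "graphql_view",
        schema=schema,
        # Strawberry now rejects multipart requests unless explicitly
        # enabled; without this, uploads fail with "Unsupported content type".
        multipart_uploads_enabled=True,
    ),
)
```

Strawberry exposes `upload_video` as `uploadVideo` in the GraphQL schema, so the mutation in the cURL request below resolves to this field.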

Test Plan:

Tested locally with cURL; the upload succeeds.

*Request*

```
curl http://localhost:7263/graphql \
  -F operations='{ "query": "mutation($file: Upload!){ uploadVideo(file: $file) { path } }", "variables": { "file": null } }' \
  -F map='{ "file": ["variables.file"] }' \
  -F file=@video.mov
```

*Response*

```
{"data": {"uploadVideo": {"path": "uploads/<HASH>.mp4"}}}
```
2024-10-08 14:58:28 -07:00
Ronghang Hu
98fcb164bf
Update links after renaming the repo from segment-anything-2 to sam2 (#341)
This PR updates repo links after we renamed the repo from `segment-anything-2` to `sam2`. It also changes `NAME` in setup.py to `SAM-2` (which is already the name used in the pip setup, since Python package names don't allow whitespace).
2024-09-30 20:27:44 -07:00
Haitham Khedr
aa9b8722d0 SAM2.1
SAM2.1 checkpoints + training code + Demo
2024-09-29 05:49:56 +00:00