Mirror of https://github.com/facebookresearch/sam2.git (synced 2025-11-04 11:32:12 +08:00)
This PR provides new features and updates for SAM 2:

- We now support `torch.compile` of the entire SAM 2 model on videos, which can be turned on by setting `vos_optimized=True` in `build_sam2_video_predictor` (it uses the new `SAM2VideoPredictorVOS` predictor class in `sam2/sam2_video_predictor.py`). See the usage sketch after this list.
  * Compared to the previous setting (which only compiles the image encoder backbone), the new full-model compilation gives a major speedup in inference FPS.
  * In the VOS prediction script `tools/vos_inference.py`, you can enable this option via the `--use_vos_optimized_video_predictor` flag.
  * Note that turning on this flag might introduce a small variance in the predictions due to numerical differences caused by `torch.compile` of the full model.
  * **PyTorch 2.5.1 is the minimum version for full support of this feature.** (Earlier PyTorch versions might run into compilation errors in some cases.) We have therefore updated the minimum PyTorch version to 2.5.1 in the installation scripts.
- We also update the implementation of the `SAM2VideoPredictor` class for SAM 2 video prediction in `sam2/sam2_video_predictor.py`, which allows for independent per-object inference. Specifically, in the new `SAM2VideoPredictor`:
  * **We now handle the inference of each object independently** (as if we were opening a separate session for each object) while sharing their backbone features.
  * This change allows us to relax the prompting assumption for multi-object tracking. Previously (due to the batching behavior in inference), if a video frame received clicks for only a subset of objects, the remaining (non-prompted) objects were assumed to be non-existent in that frame (i.e., the user was telling SAM 2 that those objects don't appear there). Now, if a frame receives clicks for only a subset of objects, we make no assumptions about the remaining (non-prompted) objects: each object is handled independently and is not affected by how other objects are prompted. As a result, **we allow adding new objects after tracking starts**, which was previously a restriction on usage.
  * We believe the new version is a more natural inference behavior and have therefore switched to it as the default. The previous implementation of `SAM2VideoPredictor` is backed up in `sam2/sam2_video_predictor_legacy.py`. All VOS inference results produced with `tools/vos_inference.py` should remain the same after this change to the `SAM2VideoPredictor` class.
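As a quick illustration of how these pieces fit together, here is a minimal sketch of building the video predictor with `vos_optimized=True` and then adding a second object after propagation has already started. It follows the README-style usage pattern; the config/checkpoint paths, video directory, frame indices, and click coordinates are placeholders, not values from this PR.

```python
import torch
from sam2.build_sam import build_sam2_video_predictor

# Build the fully compiled video predictor (requires PyTorch >= 2.5.1).
# Config/checkpoint paths are examples; use the ones matching your download.
predictor = build_sam2_video_predictor(
    "configs/sam2.1/sam2.1_hiera_b+.yaml",
    "./checkpoints/sam2.1_hiera_base_plus.pt",
    vos_optimized=True,  # uses the new SAM2VideoPredictorVOS class
)

with torch.inference_mode(), torch.autocast("cuda", dtype=torch.bfloat16):
    state = predictor.init_state(video_path="./videos/example_frames")

    # Prompt a first object on frame 0 (placeholder click coordinates).
    predictor.add_new_points_or_box(
        state, frame_idx=0, obj_id=1, points=[[210, 350]], labels=[1]
    )
    for frame_idx, obj_ids, mask_logits in predictor.propagate_in_video(state):
        pass  # consume masks for object 1

    # With per-object inference, a new object can be prompted after tracking
    # has started, without affecting how object 1 was tracked.
    predictor.add_new_points_or_box(
        state, frame_idx=30, obj_id=2, points=[[400, 220]], labels=[1]
    )
    for frame_idx, obj_ids, mask_logits in predictor.propagate_in_video(state):
        pass  # masks now cover both objects
```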
		
			
				
	
	
		
Dockerfile (65 lines, 2.0 KiB):
```dockerfile
ARG BASE_IMAGE=pytorch/pytorch:2.5.1-cuda12.1-cudnn9-runtime
ARG MODEL_SIZE=base_plus

FROM ${BASE_IMAGE}

# Gunicorn environment variables
ENV GUNICORN_WORKERS=1
ENV GUNICORN_THREADS=2
ENV GUNICORN_PORT=5000

# SAM 2 environment variables
ENV APP_ROOT=/opt/sam2
ENV PYTHONUNBUFFERED=1
ENV SAM2_BUILD_CUDA=0
ENV MODEL_SIZE=${MODEL_SIZE}

# Install system requirements
RUN apt-get update && apt-get install -y --no-install-recommends \
    ffmpeg \
    libavutil-dev \
    libavcodec-dev \
    libavformat-dev \
    libswscale-dev \
    pkg-config \
    build-essential \
    libffi-dev

COPY setup.py .
COPY README.md .

RUN pip install --upgrade pip setuptools
RUN pip install -e ".[interactive-demo]"

# https://github.com/Kosinkadink/ComfyUI-VideoHelperSuite/issues/69#issuecomment-1826764707
RUN rm /opt/conda/bin/ffmpeg && ln -s /bin/ffmpeg /opt/conda/bin/ffmpeg

# Make app directory. This directory will host all files required for the
# backend and SAM 2 inference files.
RUN mkdir ${APP_ROOT}

# Copy backend server files
COPY demo/backend/server ${APP_ROOT}/server

# Copy SAM 2 inference files
COPY sam2 ${APP_ROOT}/server/sam2

# Download SAM 2.1 checkpoints
ADD https://dl.fbaipublicfiles.com/segment_anything_2/092824/sam2.1_hiera_tiny.pt ${APP_ROOT}/checkpoints/sam2.1_hiera_tiny.pt
ADD https://dl.fbaipublicfiles.com/segment_anything_2/092824/sam2.1_hiera_small.pt ${APP_ROOT}/checkpoints/sam2.1_hiera_small.pt
ADD https://dl.fbaipublicfiles.com/segment_anything_2/092824/sam2.1_hiera_base_plus.pt ${APP_ROOT}/checkpoints/sam2.1_hiera_base_plus.pt
ADD https://dl.fbaipublicfiles.com/segment_anything_2/092824/sam2.1_hiera_large.pt ${APP_ROOT}/checkpoints/sam2.1_hiera_large.pt

WORKDIR ${APP_ROOT}/server

# https://pythonspeed.com/articles/gunicorn-in-docker/
CMD gunicorn --worker-tmp-dir /dev/shm \
    --worker-class gthread app:app \
    --log-level info \
    --access-logfile /dev/stdout \
    --log-file /dev/stderr \
    --workers ${GUNICORN_WORKERS} \
    --threads ${GUNICORN_THREADS} \
    --bind 0.0.0.0:${GUNICORN_PORT} \
    --timeout 60
```