🦱 Real Facial Image&Video Detection
Against Face Forgery (Deepfake/Diffusion) and Spoofing (Presentation-attacks)
☉ Powered by fine-tuned ViT models that are pre-trained with FSFM-3C
☉ We do not and cannot access or store the data you have uploaded!
☉ Release notes (continuously updated by Gaojian Wang/汪高健, Tong Wu/吴桐, Xingtang Luo/罗兴塘)
[V1.0] 2025/02/22-Current🎉: 1) Added [✨Unified-detector_v1] for Physical-Digital Face Attack&Forgery Detection, a ViT-B/16-224 (FSFM Pre-trained) detector that can identify Real&Bonafide, Deepfake, Diffusion&AIGC, and Spoofing&Presentation-attack facial images or videos; 2) Added selection of the number of video frames (uniformly sampling 1-32 frames; more frames may be time-consuming on this page without GPU acceleration); 3) Fixed some errors of V0.1.
[V0.1] 2024/12-2025/02/21: Created this page with basic detectors [DfD-Checkpoint_Fine-tuned_on_FF++, FAS-Checkpoint_Fine-tuned_on_MCIO] that follow the paper's implementation.
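The uniform frame sampling mentioned in the release notes can be sketched as follows. This is a minimal, hypothetical illustration of picking 1-32 evenly spaced frame indices from a video, not the Space's actual implementation; the function name and midpoint strategy are assumptions.

```python
def uniform_frame_indices(total_frames: int, num_samples: int) -> list[int]:
    """Return `num_samples` frame indices spread evenly across a video.

    Hypothetical helper illustrating the uniform 1-32 frame sampling
    described above; not the Space's actual code.
    """
    # Never request more frames than the video actually has.
    num_samples = max(1, min(num_samples, total_frames))
    step = total_frames / num_samples
    # Take the midpoint of each of the `num_samples` equal segments.
    return [int(step * (i + 0.5)) for i in range(num_samples)]
```

The selected indices would then be decoded (e.g. with a video reader), face-cropped, and passed through the chosen detector, averaging the per-frame predictions.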
- Please provide a facial image or video (<100 s), and select a model for detection:
[SUGGESTED] [✨Unified-detector_v1_Fine-tuned_on_4_classes] an (FSFM Pre-trained) ViT-B/16-224 for detecting Real/Deepfake/Diffusion/Spoofing facial images & videos
[DfD-Checkpoint_Fine-tuned_on_FF++] for deepfake detection, FSFM ViT-B/16-224 fine-tuned on the FF++_c23 train&val sets (4 manipulations, 32 frames per video)
[FAS-Checkpoint_Fine-tuned_on_MCIO] for face anti-spoofing, FSFM ViT-B/16-224 fine-tuned on the MCIO datasets (2 frames per video)