MaskAnyone in Action
See how SYNAPSIS transforms identifiable audiovisual data into privacy-preserving research material while maintaining analytical value.
Face Masking Demonstration
Compare original footage with de-identified output
Sample Recording - Face Swap Method
Classroom Recording - Selective Masking
More Demos Coming Soon
Voice anonymization, pose extraction, and batch processing examples
Request a specific demo
Available Masking Techniques
Pixelation
Block-based face obscuring. Fast, simple, widely understood.
Blur
Gaussian blur over facial regions. Adjustable intensity.
Face Swap
Replace with synthetic face preserving expressions and gaze.
Skeleton Only
Extract pose data, render skeleton visualization only.
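As an illustration of the pixelation technique listed above, a minimal sketch of block-averaging over a face bounding box (a hypothetical helper written here for illustration; MaskAnyone's actual implementation may differ):

```python
import numpy as np

def pixelate_region(frame, box, block=16):
    """Pixelate a rectangular region by block-averaging.

    frame: HxWx3 uint8 image array.
    box: (x, y, w, h) face bounding box, e.g. from a face detector
    (hypothetical coordinates for this sketch).
    """
    x, y, w, h = box
    roi = frame[y:y+h, x:x+w].astype(float)
    for by in range(0, h, block):
        for bx in range(0, w, block):
            blk = roi[by:by+block, bx:bx+block]
            blk[:] = blk.mean(axis=(0, 1))  # fill block with its mean colour
    frame[y:y+h, x:x+w] = roi.astype(np.uint8)
    return frame

# usage: mask a 64x64 face region in a synthetic 120x160 test frame
img = np.random.randint(0, 256, (120, 160, 3), dtype=np.uint8)
out = pixelate_region(img, (40, 20, 64, 64), block=16)
```

A larger `block` value gives coarser, more anonymizing pixelation; the same region-based structure applies to blur masking, with the block-averaging pass swapped for a Gaussian filter.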
Pose & Kinematic Data
Beyond masking, SYNAPSIS extracts skeletal pose data from videos, enabling gesture analysis, movement studies, and behavioral research without exposing identity.
- 33 body keypoints (MediaPipe) or 25 (OpenPose)
- Hand landmarks (21 points per hand)
- Export to JSON, CSV for analysis in R/Python
- Blendshape extraction for facial expression analysis
{
  "frame": 42,
  "timestamp": 1.4,
  "pose": {
    "nose": [0.52, 0.31, 0.98],
    "left_shoulder": [0.61, 0.48, 0.95],
    "right_shoulder": [0.43, 0.47, 0.96],
    "left_wrist": [0.71, 0.62, 0.89],
    "right_wrist": [0.33, 0.58, 0.91]
    // ... 33 keypoints total
  },
  "hands": {
    "left": [...],
    "right": [...]
  }
}
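Since the exported JSON is meant for analysis in R or Python, a minimal sketch of consuming one exported frame (field names follow the sample record above; the third coordinate is assumed here to be a confidence/visibility score):

```python
import json
import math

# One frame in the export format shown above (abbreviated to three keypoints).
record = json.loads("""
{
  "frame": 42,
  "timestamp": 1.4,
  "pose": {
    "nose": [0.52, 0.31, 0.98],
    "left_wrist": [0.71, 0.62, 0.89],
    "right_wrist": [0.33, 0.58, 0.91]
  }
}
""")

def keypoint_distance(pose, a, b):
    """Euclidean distance between two keypoints in the x/y image plane
    (normalised coordinates; the third value is ignored)."""
    ax, ay, _ = pose[a]
    bx, by, _ = pose[b]
    return math.hypot(ax - bx, ay - by)

# e.g. wrist spread in this frame, a simple kinematic feature
spread = keypoint_distance(record["pose"], "left_wrist", "right_wrist")
print(round(spread, 3))  # → 0.382
```

Iterating this over all frames and writing the results to CSV yields a per-frame time series suitable for standard statistical analysis, with no identifiable imagery involved.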
MaskAnyone Demo
MaskAnyone is deployed on Radboud University infrastructure. Access is currently restricted to project partners and pilot participants.
Hosted on Radboud Ponyland cluster. Contact us to arrange a guided demo session.
Want to Learn More?
Explore our training materials or get in touch to discuss deploying MaskAnyone at your institution.