How can we use bots to impede the malicious goals of other bots?
While we marvel at the experiential opportunities offered by the latest technologies, ambient surveillance has become our reality. Consumer cameras come equipped with facial recognition software, automatically tagging users and uploading their photos to social media. With each click and tap we freely distribute our personal metadata across the internet, without question and with little discretion, in exchange for access to the latest platforms and services. Deepfake video manipulations are increasingly normalized, disseminated virally as entertainment.
Inspired by our Orwellian present, my team and I wanted to create a service that could act as a first line of defence against the loss of our most oft-accessed biometric data. We fashioned a Twitter bot that processes images tweeted at it, applying stylized filters to any detected faces in order to obfuscate key features from similar bots with malicious intentions.
The processed images act as CAPTCHAs: they remain readable to humans, but prevent other machine-vision algorithms from recognizing the faces they contain.
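To give a sense of how such an obfuscation step might work, here is a minimal sketch of one simple filter, pixelation, applied to a face region. This is an illustration, not the bot's actual code: it assumes a face bounding box has already been located (for example by a face-detection library), and the `pixelate` function name and `block` parameter are my own.

```python
import numpy as np

def pixelate(region: np.ndarray, block: int = 8) -> np.ndarray:
    """Replace each block x block tile of an image region with its mean
    colour, destroying the fine facial features that recognition models
    rely on while leaving the image readable to a human."""
    h, w = region.shape[:2]
    out = region.copy()
    for y in range(0, h, block):
        for x in range(0, w, block):
            tile = region[y:y + block, x:x + block]
            # Average over the tile's rows and columns, keep channels.
            out[y:y + block, x:x + block] = tile.mean(axis=(0, 1)).astype(region.dtype)
    return out

# Usage: given a detected face at (x, y, w, h) in an image array `img`,
# overwrite that region in place:
#   img[y:y + h, x:x + w] = pixelate(img[y:y + h, x:x + w])
```

In practice the bot applied more stylized filters than a plain pixelation, but the principle is the same: a local transformation of the face region that degrades the features machine-vision models depend on.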
Though the server is no longer active, you can find the original Twitter handle here.
Camoufleur Bot is based on Rachel White’s Smarter & Cuter Bots.