The study investigates the possibility that Google's reCAPTCHA system uses "adversarial noise" to improve its security, a technique commonly used in artificial intelligence research to mislead machine learning models. Researchers have found evidence suggesting that adding carefully crafted noise to images can cause neural networks trained on ImageNet to misclassify them while becoming more confident in the wrong answer. The study proposes to test this hypothesis by comparing how three different neural networks perform on clean images versus the same images with either the "mystery noise" or plain Gaussian RGB noise added. If the results support the hypothesis, it would suggest that reCAPTCHA has been made more secure by incorporating adversarial examples into its challenge images.
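A minimal sketch of how such a comparison could be run, assuming PyTorch and torchvision pretrained ImageNet classifiers; the particular models, the image path "grid_tile.png", and the noise level are illustrative placeholders rather than the study's actual setup. The idea is to check whether a model's top-1 prediction and confidence shift when noise is added to an otherwise clean image.

```python
import torch
from torchvision import models
from PIL import Image

# Three pretrained ImageNet classifiers (assumed choices, not the study's exact models).
MODELS = {
    "resnet50": (models.resnet50, models.ResNet50_Weights.DEFAULT),
    "vgg16": (models.vgg16, models.VGG16_Weights.DEFAULT),
    "mobilenet_v3": (models.mobilenet_v3_large, models.MobileNet_V3_Large_Weights.DEFAULT),
}

def top1(model, x):
    """Return (predicted class index, softmax confidence) for a 1-image batch."""
    with torch.no_grad():
        probs = model(x).softmax(dim=1)
    conf, idx = probs.max(dim=1)
    return idx.item(), conf.item()

# Placeholder path standing in for a reCAPTCHA-style image tile.
image = Image.open("grid_tile.png").convert("RGB")

for name, (ctor, weights) in MODELS.items():
    model = ctor(weights=weights).eval()
    preprocess = weights.transforms()
    clean = preprocess(image).unsqueeze(0)

    # Gaussian RGB noise as a control condition; sigma = 0.1 is an assumed value.
    noisy = clean + 0.1 * torch.randn_like(clean)

    c_idx, c_conf = top1(model, clean)
    n_idx, n_conf = top1(model, noisy)
    print(f"{name}: clean -> class {c_idx} ({c_conf:.2f}), "
          f"noisy -> class {n_idx} ({n_conf:.2f})")
```

In this kind of test, plain Gaussian noise typically degrades accuracy only modestly and lowers confidence, whereas noise crafted adversarially would be expected to flip predictions while keeping confidence high, which is the signature the study looks for.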