Researchers from MIT, the Weizmann Institute of Science, and Adobe have developed a new perceptual similarity metric called DreamSim that narrows the gap between human and machine judgments of visual similarity. To train DreamSim, the team created a novel benchmark dataset called NIGHTS (Novel Image Generations with Human-Tested Similarity). Unlike previous metrics, DreamSim captures both low-level features, such as texture and color, and high-level semantic information, so its judgments track human perceptual similarity more closely. The development of DreamSim is a significant step forward in computer vision, enabling machines to interpret visual content in ways that better align with human perception.
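To make the idea concrete, the sketch below shows how a perceptual similarity metric of this kind is typically used: images are mapped to feature embeddings, distance is measured between embeddings, and a two-alternative forced-choice (2AFC) judgment, the format of the human labels collected for NIGHTS, picks whichever candidate is closer to a reference. The `embed` function here is a hypothetical stand-in (a flatten-and-normalize placeholder), not DreamSim's actual learned feature extractor.

```python
import numpy as np

def embed(image: np.ndarray) -> np.ndarray:
    """Hypothetical stand-in for a learned feature extractor;
    DreamSim itself uses fine-tuned pretrained backbones.
    Here we simply flatten and L2-normalize the pixels."""
    v = image.astype(np.float64).ravel()
    return v / (np.linalg.norm(v) + 1e-12)

def perceptual_distance(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine distance between embeddings: 0 means identical direction."""
    return 1.0 - float(embed(a) @ embed(b))

def two_afc(reference: np.ndarray, img0: np.ndarray, img1: np.ndarray) -> int:
    """Two-alternative forced choice: return the index (0 or 1) of the
    candidate that is perceptually closer to the reference, mirroring
    the human judgments gathered for the NIGHTS dataset."""
    d0 = perceptual_distance(reference, img0)
    d1 = perceptual_distance(reference, img1)
    return 0 if d0 <= d1 else 1

# Toy demonstration with synthetic "images"
rng = np.random.default_rng(0)
ref = rng.random((8, 8, 3))
near = ref + 0.01 * rng.random((8, 8, 3))  # mild perturbation of ref
far = rng.random((8, 8, 3))                # unrelated random image
print(two_afc(ref, near, far))  # → 0 (the perturbed copy is closer)
```

A real metric replaces `embed` with a trained network so that embedding distance agrees with human judgments, which is exactly what tuning on a human-labeled triplet dataset like NIGHTS accomplishes.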