Scientists have developed a new AI system that can "dive into the human brain" and gain insight into what types of faces you find most attractive.
Can an AI system recognize the facial features we find attractive in a portrait, without any voice or text commands? The researchers monitored the brain waves of 30 people with electroencephalography (EEG) and showed them "fake faces": artificial portraits created by stitching together 200,000 images of real celebrities in different ways.
The subjects didn't have to do anything, not even tap on a photo they liked, because the team could infer their "unconscious preferences" from the EEG readings. The researchers then fed the data into an AI system that interprets the brain waves and predicts which faces a viewer finds attractive.
In the future, the technology could be used to determine a tester's personal preferences, or to probe unconscious attitudes that people might not discuss publicly, such as those concerning race, religion and politics. The current system can model the subjective perception of which faces are most attractive.
In previous studies, scientists designed models that could identify and manipulate simple portrait features, such as hair color and facial expression, but determining attractiveness is a far more challenging research problem.
In early, limited studies of facial attractiveness, people generally agreed that features such as blond hair and a smile are appealing, but those are only surface details. Analysing which faces a given tester finds more attractive is a much harder task, because attraction involves cultural and psychological factors that may act unconsciously on our personal preferences.
Indeed, it is hard to explain why some people find a face attractive while others do not, which may confirm the adage that beauty is in the eye of the beholder. The researchers first built a generative adversarial network (GAN), an intelligent system that can create hundreds of artificial portraits, which were shown to 30 subjects one at a time. The subjects were asked to note which of the images they found attractive, while electroencephalography (EEG) recorded their brain responses.
It's a bit like a dating app. In this latest study, participants did nothing but look at the photos while their immediate, non-verbal brain responses to the portraits were measured; the researchers then used machine learning to analyze the EEG data and train a neural network.
A brain-computer interface like this can work out which portraits a subject finds attractive: an artificial-intelligence model reads the brain's responses, and by combining the features that a given person's brain flags as attractive, a generative neural-network model produces entirely new images of faces.
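The first half of that pipeline, reading a brain response and deciding whether the viewer found a face attractive, can be caricatured with a toy classifier. Everything below is invented for illustration: the simulated "EEG" feature vectors, the response shift for attractive faces, and the nearest-mean rule are stand-ins, not the study's actual methods.

```python
import random

random.seed(0)

def simulate_eeg(attractive):
    # Assumption (invented): attractive faces evoke a slightly larger
    # response in two channels; Gaussian noise stands in for real EEG
    # variability. A real pipeline would use measured ERP features.
    signal = [random.gauss(0.0, 0.3) for _ in range(4)]
    if attractive:
        signal[0] += 1.0
        signal[2] += 1.0
    return signal

train = [(simulate_eeg(i % 2 == 0), i % 2 == 0) for i in range(200)]

def mean_vector(samples):
    n = len(samples)
    return [sum(s[d] for s in samples) / n for d in range(4)]

# Nearest-mean classifier: one centroid per class.
centroid_yes = mean_vector([s for s, y in train if y])
centroid_no  = mean_vector([s for s, y in train if not y])

def classify(signal):
    d_yes = sum((a - b) ** 2 for a, b in zip(signal, centroid_yes))
    d_no  = sum((a - b) ** 2 for a, b in zip(signal, centroid_no))
    return d_yes < d_no  # True = predicted "attractive"

test_set = [(simulate_eeg(i % 2 == 0), i % 2 == 0) for i in range(100)]
accuracy = sum(classify(s) == y for s, y in test_set) / len(test_set)
print(f"held-out accuracy: {accuracy:.2f}")
```

Because the simulated classes are well separated, even this crude rule scores highly; the point is only the shape of the pipeline, signals in, attractive-or-not labels out.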
To test the effectiveness of their model, the researchers generated new facial images for each participant and predicted whether that participant would find them attractive. In a double-blind test, the newly generated images matched the participants' personal preferences more than 80 percent of the time. A double-blind test is one in which neither the experimenter nor the subject knows which group (experimental or control) a subject belongs to, and the analyst usually does not know which group the data under analysis came from.
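The reported figure works out as a simple match rate between the model's claims and the participant's actual ratings. A hypothetical example (the ratings below are made up, not the study's data):

```python
# The model generated 10 new faces it believed this participant would
# find attractive; the participant then rated each one.
predicted_attractive = [True] * 10
participant_ratings = [True, True, True, False, True,
                       True, True, True, False, True]

# Match rate: fraction of images where prediction and rating agree.
matches = sum(p == r for p, r in zip(predicted_attractive, participant_ratings))
match_rate = matches / len(participant_ratings)
print(f"match rate: {match_rate:.0%}")  # prints "match rate: 80%"
```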
This study shows that we can connect artificial neural networks to brain responses to generate images that match our preferences. Success in assessing attractiveness is significant because attraction is a deep psychological response to external stimuli. So far, computer vision has been very successful at categorizing images by objective patterns; by adding the brain's responses to the mix, we have shown that it is possible to detect a person's aesthetic standards and generate "beautiful" images tuned to psychological attributes such as personal taste.
The technology could ultimately expose unconscious attitudes that subjects are unable, or unwilling, to express consciously. By combining artificial-intelligence methods with interactive brain-computer interfaces, the study could improve computers' ability to learn about and understand people's subjective preferences, to the benefit of society.
If individual subjective factors like aesthetic preference can be analyzed this way, we may be able to study other cognitive processes, such as perception and decision-making, and perhaps adjust cognitive strategies or surface implicit biases to better understand individual differences.
How do generative adversarial networks work
A generative adversarial network pits two algorithms against each other; with it, scientists are trying to generate content that reflects real human subjective preferences.
These "imagined" digital creations can be images, video, audio or other content. One artificial-intelligence system, the generator, creates new content based on the data fed into it and what it has learned; the other, the discriminator, criticizes those creations, pointing out their defects and inaccuracies.
Eventually, the process could allow AI systems to learn new information without any human input.
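The generator-versus-critic loop can be sketched with a deliberately tiny example. Instead of images, the "generator" here learns a single number and the "discriminator" a single threshold; both update rules are invented simplifications of how a real GAN's two networks push against each other, not an actual GAN implementation.

```python
import random

random.seed(1)
REAL_MEAN = 5.0  # the "real data" the generator should imitate

gen_mean = 0.0        # generator's only parameter: where its fakes center
disc_threshold = 2.5  # discriminator's only parameter: real/fake cutoff
lr = 0.05             # learning rate for both players

for step in range(2000):
    real = random.gauss(REAL_MEAN, 0.5)  # a sample of real data
    fake = random.gauss(gen_mean, 0.5)   # the generator's current attempt

    # Discriminator: drift its threshold toward the midpoint between
    # the real and fake samples it just saw (its "criticism").
    disc_threshold += lr * ((real + fake) / 2 - disc_threshold)

    # Generator: nudge its output toward the side the discriminator
    # currently labels "real" (above the threshold).
    gen_mean += lr if fake < disc_threshold else -lr

print(f"generator mean after training: {gen_mean:.2f}")  # near 5.0
```

As the generator improves, the discriminator's threshold tightens, which in turn forces the generator closer to the real distribution; that mutual pressure is the core adversarial idea.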
How does artificial intelligence learn through neural networks
AI systems are based primarily on artificial neural networks (ANNs), which attempt to learn by mimicking the way the human brain works. They can be trained to recognize patterns in information, whether speech, text or visual images, and they have been the basis of AI's progress in recent years.
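Pattern recognition by an artificial neuron can be shown with a minimal sketch: a single perceptron (far simpler than the deep networks the article refers to) learns to tell a 3x3 "X" image from a "+" image. The patterns and their one-pixel corruptions are invented for this demonstration.

```python
X_SHAPE = [1, 0, 1, 0, 1, 0, 1, 0, 1]  # 3x3 pixels of an "X", row by row
PLUS    = [0, 1, 0, 1, 1, 1, 0, 1, 0]  # 3x3 pixels of a "+"

def variants(base):
    # The clean pattern plus every one-pixel corruption of it.
    out = [base[:]]
    for i in range(9):
        noisy = base[:]
        noisy[i] = 1 - noisy[i]
        out.append(noisy)
    return out

samples = ([(v, 1) for v in variants(X_SHAPE)] +
           [(v, 0) for v in variants(PLUS)])

# Perceptron rule: adjust weights only when a sample is misclassified.
weights, bias = [0.0] * 9, 0.0
for _ in range(50):  # epochs
    for pixels, label in samples:
        score = sum(w * p for w, p in zip(weights, pixels)) + bias
        err = label - (1 if score > 0 else 0)
        if err:
            weights = [w + err * p for w, p in zip(weights, pixels)]
            bias += err

correct = sum(
    (1 if sum(w * p for w, p in zip(weights, f)) + bias > 0 else 0) == y
    for f, y in samples
)
print(f"learned to separate X from +: {correct}/{len(samples)} correct")
```

The two pattern classes are linearly separable, so this single neuron is guaranteed to learn a perfect separation; real image tasks need many layers of such units.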
Traditional AI uses reams of data to "teach" algorithms about specific targets; examples include Google's language-translation service, Facebook's facial-recognition software and Snapchat's real-time image filters.
Feeding in this data can be time-consuming, and each model is limited to one type of knowledge. A newer kind of artificial neural network, the generative adversarial network, pits two AI systems against each other so that they learn from each other. The method is designed to speed up the learning process and improve the quality of the AI's generated output.