As AI becomes increasingly realistic, our trust in those with whom we communicate may be compromised. Researchers at the University of Gothenburg have examined how advanced AI systems affect our trust in the people we interact with.

In one scenario, a would-be scammer, believing he is calling an elderly man, is instead connected to a computer system that communicates through pre-recorded loops. The scammer spends considerable effort attempting the fraud, patiently listening to the “man’s” somewhat confusing and repetitive stories. Oskar Lindwall, a professor of communication at the University of Gothenburg, observes that it often takes people a long time to realize they are interacting with a technical system.

In collaboration with Professor of Informatics Jonas Ivarsson, he has written the article Suspicious Minds: The Problem of Trust and Conversational Agents, examining how individuals interpret and relate to situations in which one of the parties might be an AI agent. The article highlights the negative consequences of harboring suspicion toward others, such as the damage it can do to relationships.

Ivarsson gives the example of a romantic relationship in which trust issues lead to jealousy and an increased tendency to search for evidence of deception. The authors argue that being unable to fully trust a conversational partner’s intentions and identity may result in excessive suspicion even when there is no obvious reason for it.

Their study found that, even during interactions between two humans, certain behaviors were interpreted as signs that one of the parties was actually a robot.

The researchers suggest that a pervasive design perspective is driving the development of AI with increasingly human-like features. While this may be appealing in some contexts, it can also be problematic, particularly when it is unclear who you are communicating with. Ivarsson questions whether AI should have such human-like voices at all, since they create a sense of intimacy and lead people to form impressions based on the voice alone.

In the case of the would-be con artist calling the “elderly man”, Lindwall and Ivarsson note that the scam is exposed only after a long time, because the believable human voice and the assumption that the confused behavior is due to age keep the deception hidden. Once an AI has a voice, we infer attributes such as gender, age, and socioeconomic background, making it harder to recognize that we are interacting with a computer.

The researchers propose creating AI with well-functioning and eloquent voices that are nevertheless clearly synthetic, thereby increasing transparency.

Communication with others involves not only deception but also relationship-building and the creation of shared meaning. Uncertainty about whether one is talking to a human or a computer affects this aspect of communication. While it might not matter in some situations, such as cognitive-behavioral therapy, other forms of therapy that require more human connection may be negatively affected.

Oskar Lindwall and Jonas Ivarsson analyzed data made available on YouTube. They studied three types of conversations, along with the audience reactions and comments they drew. In the first type, a robot calls a person to book a hair appointment without the person on the other end realizing it. In the second type, a person calls another person for the same purpose. In the third type, telemarketers are transferred to a computer system with pre-recorded speech.