Trust in the laboratory can be related to confidence. It has been known for some time that confident testimony has greater influence, especially when it comes from people who also calibrate their confidence to their probability of being correct (Tenney et al., 2007). As such, you can examine trust in the laboratory by looking at how people take the advice of advisors who vary in how confident their advice is (see some cool new work by Yeung and Shea). It is apparently even possible to create computer models of trust, which update trust in an opponent on the basis of previous experiences (Juvina et al., 2015); a toy sketch of this idea follows below. One interesting context in which these models were used was peer-assisted learning of paired associates, in which your partner can inform your answers to the paired associates. In a slightly less cognitive lab setting, trust can be assessed by looking at people's facial expressions as they perform a task collaboratively (the Social BART task). What's more, humans can even pick up cues to trustworthiness from body odors, although this effect is modulated by gender. Extraction of social information from smell is also disrupted in people with autism spectrum disorder.
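The actual Juvina et al. model is implemented in ACT-R and is considerably more sophisticated, but the core idea of updating trust from previous experiences can be sketched in a few lines. The function, parameter names, and learning rate below are my own illustration, not theirs:

```python
# Minimal sketch of experience-based trust updating (illustrative only; the
# Juvina et al. (2015) model is an ACT-R model and works differently).

def update_trust(trust, positive_outcome, learning_rate=0.1):
    """Move trust toward 1 after a positive experience and toward 0 after a negative one."""
    target = 1.0 if positive_outcome else 0.0
    return trust + learning_rate * (target - trust)

trust = 0.5  # start out neutral about the partner
for outcome in [True, True, False, True, False, False]:  # hypothetical interaction history
    trust = update_trust(trust, outcome)
    print(f"outcome={outcome}, trust={trust:.2f}")
```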
Another dimension of trust arises in teams of humans and robots collaborating. Antonio Chella studies whether trust can be recovered when we let a robot say "sorry". You can also look at how humans trust automation (e.g., in a factory) and how often they notice failures of that automation, such as in the AF-MATB task. Apparently errors by the automated system can even elicit an error-related negativity ("oERN"). As the machine/factory makes more errors, people evidently trust it less. In fact, the reliability of one artificial agent affects how reliable we think another agent is: trust calibration.
On the other hand, do humans consider machines in the same ways as other humans? Jonathan Gratch looks at which aspects of robot behavior make us treat robots like humans rather than machines. Apparently the relevant dimensions are a sense of agency and displays of emotion, which together he calls mind perception. Humans treat robots unfairly and exhibit different emotions when they feel they are just machines. When you add emotions to the robot, people start to treat it in a more human-like way. Apparently you can even decode from human brain activity whether people think they are dealing with humans versus machines. Gaze is also an important cue that humans use to decide whether to trust a robot. Angelo Cangelosi uses investment games to study how much people trust robots, and observed that people invest more in nice than in nasty Naos. Amazingly enough, even rats prefer helpful robots over non-helpful robots! Team interactions can also be modelled in ACT-R, as Chris Myers' work on synthetic teammates shows.
Slightly less related to trust, but more to influence, was work by Matt Lieberman, who showed that activity in the mPFC could predict behavior change in many contexts, such as smoking cessation, wearing sunscreen, and more. Now what happens between two people as they are successfully influenced? In experiments at Mount Jordan, Matt Lieberman showed that people's brains are more synchronized when they are watching a video together, are engaged, and share a common reality. Synchrony in speech (speech entrainment) can also create social connectedness, because it is associated with increased positive feelings. However, this is not a simple phenomenon: apparently it is not the case that more entrainment is simply better; rather, more variation in entrainment is better (a crude sketch of how one might quantify this follows below). The amount of speech entrainment even seems to affect whether people take advice from an avatar, although that is again a messy process. Less biological ways to measure connectedness include a questionnaire of social presence, which Kerstin Dautenhahn found to be sensitive to whether or not robots synchronized to the interaction with humans.
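I don't know exactly how entrainment was quantified in that work, but to give a flavour of the kind of measure involved, here is a crude sketch: correlate two speakers' pitch contours window by window, then look at both the mean and the variability of those local scores. The window size and the fake pitch tracks are purely illustrative:

```python
import numpy as np

def windowed_entrainment(pitch_a, pitch_b, win=50):
    """Correlate two speakers' pitch contours in successive windows.

    pitch_a, pitch_b: equal-length 1-D arrays of pitch values (e.g. in Hz).
    Returns one correlation per window as a crude local entrainment score.
    """
    scores = []
    for start in range(0, len(pitch_a) - win, win):
        a = pitch_a[start:start + win]
        b = pitch_b[start:start + win]
        scores.append(np.corrcoef(a, b)[0, 1])
    return np.array(scores)

# Mean entrainment vs. how much it varies over the conversation:
rng = np.random.default_rng(0)
a, b = rng.normal(200, 20, 1000), rng.normal(200, 20, 1000)  # fake pitch tracks
scores = windowed_entrainment(a, b)
print(f"mean entrainment: {scores.mean():.2f}, variation: {scores.std():.2f}")
```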
Other very interesting work by Clara Pretus looked at what is different in the brains of people who are willing to fight and die for sacred values compared to people who are not. The main difference seemed to be less reliance on the dorsolateral prefrontal cortex for making these kinds of decisions. On a more positive note, very interesting work by Daniel Fessler showed how watching brief videos of prosocial behavior promotes real-world prosocial behavior (donations). The emotion of elevation appeared to be driving this real-world behavior. An important determinant in video content appeared to be reciprocation between the actors. Other happy news is a study by Adam Cohen, who showed that when you ask people which fictitious characters they would friend on Facebook, they trust Muslims and Christians equally, and the people they find most trustworthy are those who engage in costly religious practices (such as adhering to a kosher diet).
On a larger scale, influence can be measured on Twitter. People such as Vlad Barash have been developing network methods to study social contagion on this social media platform (a toy sketch of such a contagion process follows below). Tim Weninger showed that social rating systems have a huge influence on how much other people like images/posts: so much so that people are very poor at predicting which image will be more popular on social media, and popularity ratings are driven primarily by other users' ratings. In short, trust and influence are highly complex topics, on which multidisciplinary research is being done from many angles and perspectives.
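As an aside on the contagion work: Barash's network methods are far more elaborate than this, but a toy threshold-contagion model gives a flavour of the kind of process those methods study. The graph size, seed set, and threshold below are arbitrary choices of mine:

```python
import random
import networkx as nx

# Toy threshold contagion: a node adopts once enough of its neighbours have adopted.
random.seed(1)
G = nx.erdos_renyi_graph(n=100, p=0.05)
adopted = set(random.sample(list(G.nodes), 5))  # a handful of initial adopters
threshold = 0.2  # fraction of adopting neighbours needed to adopt yourself

changed = True
while changed:
    changed = False
    for node in G.nodes:
        if node in adopted:
            continue
        neighbours = list(G.neighbors(node))
        if neighbours and sum(n in adopted for n in neighbours) / len(neighbours) >= threshold:
            adopted.add(node)
            changed = True

print(f"{len(adopted)} of {G.number_of_nodes()} nodes adopted")
```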
Some useful tools I learnt about:
- facial analysis in OpenFace
- voice analysis with PRAAT (see the sketch after this list)
- Leanne Hirshfield has code for classifying brain data
- an online experiment on multitasking between an N-back task and monitoring an autonomous system
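For the PRAAT item above: Praat has its own scripting language, but if you prefer Python, the parselmouth package (praat-parselmouth on PyPI) exposes Praat's analyses. Here is a minimal sketch of pulling out a pitch contour; the file name is a placeholder:

```python
import parselmouth  # Python interface to Praat (pip install praat-parselmouth)

# Extract a pitch contour from a recording; "recording.wav" is a placeholder file name.
snd = parselmouth.Sound("recording.wav")
pitch = snd.to_pitch()
frequencies = pitch.selected_array['frequency']  # one pitch estimate per analysis frame, in Hz
voiced = frequencies[frequencies > 0]            # unvoiced frames come out as 0
print(f"mean pitch: {voiced.mean():.1f} Hz over {len(voiced)} voiced frames")
```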