The HS-PrediCt Blog

Participant and Public Involvement Group

1st April 2022

Alexina Whitley and Eszter Porter

On 1st April 2022, we re-opened our doors to welcome members of our Participant and Public Involvement (PPI) group into the building for the first time since the COVID-19 pandemic began. Many teams within Hearing Sciences Scottish Section took the opportunity to discuss their work and ideas with the members. Here we would like to share what the HS-PrediCt team learnt and why it is of interest.

As you may have guessed from the name, our team is interested in prediction and how we use it in conversation. Prediction in conversation can mean anticipating what someone is going to say before they have said it, or anticipating when someone is going to start or stop talking. Making predictions during conversation facilitates understanding and allows for smoother turn-taking. People with hearing loss often report finding conversation difficult, resulting in breakdowns in the flow of conversation. A particular challenge is following what is being said as conversations drift in topic and shift from one talker to another. It may be that people with hearing impairment are not able to benefit from prediction in the same way that those with normal hearing do. So we met with our PPI members to hear their thoughts on how they, as individuals with hearing loss, feel they use prediction in everyday conversation.

Alex and Eszter presenting at the PPI meeting, April 2022

Predicting what someone is going to say

Our members told us that they believe they can anticipate what someone is going to say, but only in some situations. When we asked what might enable them to do this more or less effectively, the major factor they unanimously highlighted was familiarity. It may not seem too surprising that the more we have conversed with someone, the better we feel we can predict what they might say. In fact, research has found that the more familiar we are with a person, the more able we are to draw on shared knowledge and common ground, allowing for a more efficient and less effortful conversation (Doedens, Bose, Lambert & Meteyard, 2021; Zwaan, 2015).

Another factor our members shared that affects the ability to follow conversation successfully was whether the topic suddenly changes mid-conversation. This came up again when one of our PPI members noted that ‘when people skip a point on an agenda in a meeting I lose track of what they're saying’. This suggests that contextual cues, such as the topic of discussion, can also be very important for people with hearing loss when following a conversation and predicting what someone might say.

Predicting when someone is going to stop or start talking

Next, we asked our members whether they find it difficult to anticipate when someone is going to start or stop talking, and received mixed responses. Whilst some of our members felt they could do this with ease, others found it quite difficult. One of the major challenges they highlighted was knowing when they were expected to respond, saying they often found themselves ‘talking over the end of other people’s sentences’ as they ‘have to guess when [the other person] is finished talking’. When asked how they find keeping track of speakers taking turns in conversation, our members shared stories of the difficulties they faced at social gatherings, in meetings, and on everyone's favourite new meeting spot, Zoom (though I think we all share their frustrations on this one).

What does this mean?

From the current academic literature and from talking to those with hearing loss, we can see that the way in which people use prediction in conversation might depend on their hearing ability. For listeners with good hearing, anticipatory mechanisms allow them to predict not only the final words someone is going to say, but also the moment at which that speaker is going to finish (De Ruiter et al., 2006; Magyari & de Ruiter, 2012). The ability to predict the end of a speaker’s turn allows listeners to plan their upcoming response in advance and respond in a well-timed manner. As a result, prediction makes conversational turn-taking quick and efficient, with gaps between turns lasting only around 200 ms on average (Stivers et al., 2009).

In contrast, some (but not all) of our PPI members highlighted how difficult they found it to identify when a speaker's turn was over, suggesting that hearing loss could affect their ability to reliably predict turn-end cues in everyday conversation. Research shows that older hearing-impaired listeners don’t seem to benefit from prediction as much as normal-hearing listeners (Benichov et al., 2012), and appear to respond to speech more slowly (Wendt et al., 2015; Sørensen et al., 2019). The responses from our members are consistent with this research, suggesting reduced engagement of anticipatory mechanisms, which may in turn affect how easily they can engage in conversation.

Interestingly, however, they specifically highlighted that they could ‘tell by body language whether they need to respond’, even if they were struggling to follow what was being said. Speech in face-to-face conversation is multi-modal in nature, meaning we combine auditory and visual information to make sense of what is being said to us (Holler & Levinson, 2019). This suggests that people with hearing loss are aware that they rely substantially on visual information, possibly because of their degraded auditory input, which may help them follow what is being said and know when and how to respond to their conversation partners (Sparrow, Lind & van Steenbrugge, 2020). Functional Magnetic Resonance Imaging (fMRI) research has shown that individuals with hearing loss display greater activation in frontal regions of the brain when processing audio-visual speech compared to those with normal hearing, suggesting that they rely more strongly on the audio-visual integration of speech than those without a hearing loss (Rosemann & Thiel, 2018). The comments from our PPI members about looking to body language to help them predict in conversation therefore align with the literature, which suggests a greater reliance on visual cues to help aid conversation flow and understanding.

What next?

The HS-PrediCt team will be running a series of experiments over the coming years, aiming to better understand any potential differences in predictive mechanisms between those with a hearing impairment and those without, and how these differences might be seen in conversational behaviours.

We will be sure to keep our blog up to date with what we’ve been up to so you can follow along. If you have any questions, or want to know more about our upcoming experiments, please do get in touch via email: hs-predict@nottingham.ac.uk.

Credits:

  • Thank you to the Participant and Public Involvement panel who participated
  • Organisers: Patrick Howell, David Mcshefferty 
  • Facilitators: Alexina Whitley, Eszter Porter
  • This Participant and Public Involvement meeting was supported by the Medical Research Council (MR/S003576/1) and the Chief Scientist Office of the Scottish Government.
  • The HS-PrediCt team is funded by a UKRI Future Leaders Fellowship (MR/T041471/1).

References:

Benichov, J., Cox, L. C., Tun, P. A., & Wingfield, A. (2012). Word recognition within a linguistic context: effects of age, hearing acuity, verbal ability, and cognitive function. Ear and Hearing, 33(2), 250–256. doi: 10.1097/AUD.0b013e31822f680f

De Ruiter, J. P., Mitterer, H., & Enfield, N. J. (2006). Projecting the end of a speaker’s turn: a cognitive cornerstone of conversation. Language, 82, 515-535. doi: 10.1353/lan.2006.0130

Doedens, W., Bose, A., Lambert, L., & Meteyard, L. (2021). Face-to-Face Communication in Aphasia: The Influence of Conversation Partner Familiarity on a Collaborative Communication Task. Frontiers in Communication, 6. doi: 10.3389/fcomm.2021.574051

Holler, J., & Levinson, S. (2019). Multimodal Language Processing in Human Communication. Trends in Cognitive Sciences, 23(8), 639-652. doi: 10.1016/j.tics.2019.05.006

Magyari, L., & De Ruiter, J. P. (2012). Prediction of turn-ends based on anticipation of upcoming words. Frontiers in Psychology, 3, 376. doi: 10.3389/fpsyg.2012.00376

Rosemann, S., & Thiel, C. (2018). Audio-visual speech processing in age-related hearing loss: Stronger integration and increased frontal lobe recruitment. NeuroImage, 175, 425-437. doi: 10.1016/j.neuroimage.2018.04.023

Sørensen, A. J. M., MacDonald, E. N., & Lunner, T. (2019). Timing of turn taking between normal-hearing and hearing-impaired interlocutors. In Proceedings of the International Symposium on Auditory and Audiological Research, 7, 37-44. 

Sparrow, K., Lind, C., & van Steenbrugge, W. (2020). Gesture, communication, and adult acquired hearing loss. Journal of Communication Disorders, 87, 106030. doi: 10.1016/j.jcomdis.2020.106030

Stivers, T., Enfield, N. J., Brown, P., Englert, C., Hayashi, M., Heinemann, T., ... & Levinson, S. C. (2009). Universals and cultural variation in turn-taking in conversation. Proceedings of the National Academy of Sciences, 106(26), 10587-10592. doi: 10.1073/pnas.0903616106

Wendt, D., Kollmeier, B., & Brand, T. (2015). How hearing impairment affects sentence comprehension: Using eye fixations to investigate the duration of speech processing. Trends in Hearing, 19. doi: 10.1177/2331216515584149

Zwaan, R. (2015). Situation models, mental simulations, and abstract concepts in discourse comprehension. Psychonomic Bulletin & Review, 23(4), 1028-1034. doi: 10.3758/s13423-015-0864-x
