Planning communication access for online conferences

Lauren Gawne and Gabrielle Hodge

The corner of one of the breakout rooms at LingComm21, located in Gather, a pixel-art, proximity-based chat and interaction platform. The captioner has a dedicated chair marked off by velvet ropes. One conference table says 'text only'. Lauren Gawne's avatar stands in the space. Deep purple carpet and weirdly oversized conference tables.

There’s a lot to like about online events. Travel and its associated costs are no longer a barrier to participation, making it easier for a wider range of people to engage with conferences. This has made online events more accessible than physical events, but online events still need a thoughtful approach to communication access. In this post we provide a brief introduction to making conferences accessible to deaf and hard of hearing participants, and share our experiences as a participant (Gabrielle Hodge) and an organiser (Lauren Gawne) of the 2021 International Conference on Linguistics Communication (aka LingComm21). Making communication access an intentional part of academic event design benefits all participants, but it does require event organisers to do the work.

Communication access is about ensuring people can bring their best to the event and that everyone can engage in all directions. We want our academic communities to reflect the same variation in lived experience and expertise as the rest of our lives. Planning for communication access should be the same as planning physical access or catering: you don’t wait until people turn up and tell you they’re hungry to plan catering for an event. Communication access should be built into every event, much like making sure accessible toilets are available, that everyone can get into the building and use facilities with ease, and that there’s a range of food, not just egg sandwiches. Here are some common and easy-to-implement communication access options that help you engage with your deaf and hard of hearing colleagues.

Captions are a written representation of the spoken content that appears simultaneously on a screen. Closed captions provide people with a choice to display them or not, while open captions are always running and fixed on the screen for everyone. Captioning can be very effective for pre-recorded video content, or a verbatim transcript can be provided for audio-only content such as podcasts. Captions need to be created by a human to provide access for conference participation. 

Automatic captions are generated using AI, and can be more or less accurate depending on the language, the speaker’s accent and the frequency of technical vocabulary. There is a reason we often call them “craptions”. A common assumption is that automatic captions automatically provide communication access for deaf and hard of hearing people. They do not. Automatic captions should never be switched on and assumed to be providing access: they need to be checked and corrected by a human to ensure they are accurate. You should only make use of automatic caption functions at conferences if you are preparing a pre-recorded resource and are prepared to correct the transcript and time alignment of the captions yourself.
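If you do go down that route for pre-recorded content, correcting the transcript text still needs a careful human read-through, but fixing a consistent timing lag can be partly automated. Purely as an illustration (this is not part of any particular conference workflow), here is a minimal Python sketch that shifts every timestamp in a SubRip (.srt) caption file by a fixed offset; the file names and offset are hypothetical.

```python
import re
from datetime import timedelta

# Matches SubRip timestamps like 00:01:02,345
TIMESTAMP = re.compile(r"(\d{2}):(\d{2}):(\d{2}),(\d{3})")

def _shift(match: re.Match, offset: timedelta) -> str:
    h, m, s, ms = (int(g) for g in match.groups())
    shifted = timedelta(hours=h, minutes=m, seconds=s, milliseconds=ms) + offset
    total_ms = max(int(shifted.total_seconds() * 1000), 0)  # clamp at 00:00:00,000
    h, rest = divmod(total_ms, 3_600_000)
    m, rest = divmod(rest, 60_000)
    s, ms = divmod(rest, 1_000)
    return f"{h:02}:{m:02}:{s:02},{ms:03}"

def shift_srt(in_path: str, out_path: str, offset_ms: int) -> None:
    """Shift all caption timestamps in an .srt file by offset_ms milliseconds."""
    offset = timedelta(milliseconds=offset_ms)
    with open(in_path, encoding="utf-8") as src:
        text = src.read()
    with open(out_path, "w", encoding="utf-8") as dst:
        dst.write(TIMESTAMP.sub(lambda m: _shift(m, offset), text))

# Hypothetical example: the auto-captions run 1.5 seconds ahead of the audio.
# shift_srt("talk_auto.srt", "talk_fixed.srt", offset_ms=1500)
```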

A four-panel Star Wars meme. First panel: a young Anakin Skywalker, text: “i’ve added subtitles to make my content accessible”. Second panel: Padmé Amidala, smiling, text: “but not automatically generated ones, right?”. Third panel: young Anakin looking serious, no text. Fourth panel: Padmé looking slightly concerned, text: “but not or tomato call he gem her eight it ones, write?”.
Meme via Tony B (@Saltbar)

Live captions are generated by a human, either a specialist stenographer (the best accuracy possible) or via re-speaking technology (more prone to error), and appear in near real time alongside the speech. Professional live captions are much more accurate than automatic captions. At a physical event they can be projected onto a screen or device. At digital events, depending on the workflow, they can be displayed in Zoom, a YouTube livestream, or on a separate webpage. Professional and accurate live captions are generally easy to read and understand, and they provide quality access to the spoken language content of the event.

Sign language interpreting at academic events involves qualified sign language interpreters who simultaneously interpret between a spoken and a signed language. For example, between spoken English and BSL (British Sign Language) or between Libras and Brazilian Portuguese. This is an incredibly complex job requiring exceptional proficiency in the working languages, and you’ll often see interpreters swapping out repeatedly over an hour-long event. Interpreters may be deaf or hearing. They might work between a spoken and signed language, or between two signed languages. At some events, there may even be a few different sign language interpreting teams all working between their languages as one big team. Sign language interpreting requires a lot of preparation and therefore must be organised months in advance: ideally as soon as you know the dates and times. One week before the conference is too late. Online conferences are radically changing the working conditions of interpreters (not necessarily for the better), and re-igniting debates in different countries about what signed languages are available at international conferences, and who is responsible for carrying the cost. These discussions might not be happening in your country or research community, but if you’re organising an international event it can be good to know they might be happening in other places.

There are other communication access options that you might also need to consider depending on the event community, such as audio descriptions for blind people. We’ve described the ones above because they’re kind of like vegetarian catering at an event: there is always some background percentage of society that will require vegetarian food as a matter of priority, but vegetarian options also cater to a range of other participants (in the case of food, this might be people who keep kosher, have pregnancy-based food aversions or just don’t like eggs). In the case of captions, many other people can make use of them: people who have auditory processing issues, people who are listening to a language that is not their first language or who prefer to read, and event organisers who might like a written record of the event. However, these are not the only options for communication access, and many events will benefit from a combination of several options. You should always make space for people to disclose their needs, and build them in as required. And if people with particular access needs are ongoing participants in your community (or are even the subject of study, as in sign language linguistics), then those access options must become a standard feature of your events, full stop. It’s the same if a member of your community regularly attends and requires sign language interpreters.

Of course, it’s rare to go to a conference where everyone is thrilled with the catering, and you won’t always accommodate everyone with the communication access you provide either. The important thing to remember as an organiser is that this is a limitation of the event resources, and not a limitation on the part of the participants. In the case of LingComm21, we made it clear to everyone from the outset that the registration fees were directly funding live captions for some of the conference, i.e., all of the plenary events and 1-2 of the streamed sessions. We chose to begin with live captioning because we had participants from across the English-speaking world and didn’t want to privilege one sign language from the outset. We also had a clear section on the registration form for other access requests in case someone did want specific sign language interpreting, so that we could work together to arrange it. We ensured our conference platform had the technical capacity to include interpreter video feeds if someone wanted to bring their own interpreters rather than use the captions. Afterwards, many attendees acknowledged that live captions were important, even if they didn’t directly benefit from them. They also said they were willing to pay more in registration in future years to cover these costs. Funding for communication access can be a major budget element, but one of the most important things organisers can do is normalise it as part of event budgeting.

These guidelines for integrating captions and signed language interpreters from Julia Miller and Gabrielle Hodge at the Centre of Excellence for the Dynamics of Language are a good place to start familiarising yourself with these topics in more detail.

With all of these considerations in mind, we want to discuss how communication access for deaf and hard of hearing people played out at LingComm21 in April 2021. In this section we’ll begin by sharing our individual experiences as organiser and participant in this event, before reflecting on these experiences together.

Event organiser (Lauren Gawne)

We set out with the intention of designing LingComm21 as an online event that brought the same social energy as a physical event. We held the event in Gather, which is a proximity-based social video platform. This allowed us to have participants move about and use text chat as well as audio/video chat. We had some areas that were specifically flagged as text-only, which was useful for a variety of people who didn’t use audio, including people with low bandwidth internet, and participants who were bravely joining us from Europe at 2am and were in quiet houses. 

Gather is a new platform, and it still has some way to go to make caption integration smoother. For large sessions we were streaming into Gather from YouTube, but this workflow added enough variables that not everyone was seeing the captions alongside the videos. Videos in Gather are also small unless you click to expand them, and when people turn their video off you need to prompt them to turn it back on, which can make it hard to facilitate signed chat.
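As a hedged aside for anyone using a similar YouTube-based workflow: YouTube’s IFrame embed accepts the cc_load_policy and cc_lang_pref URL parameters, which ask the player to display a caption track by default. Whether a platform like Gather passes these through to its embedded player is a separate question, so treat the sketch below (in Python, with a placeholder video ID) as something to test well before the event rather than a guaranteed fix.

```python
from urllib.parse import urlencode

def captioned_embed_url(video_id: str, lang: str = "en") -> str:
    """Build a YouTube IFrame embed URL that requests captions on by default.

    cc_load_policy=1 asks the player to show the caption track on load, and
    cc_lang_pref sets the preferred caption language. Whether an embedding
    platform honours these parameters depends on how it constructs the embed.
    """
    params = urlencode({"cc_load_policy": 1, "cc_lang_pref": lang})
    return f"https://www.youtube.com/embed/{video_id}?{params}"

# Placeholder video ID: replace with your own livestream and test it end to end.
print(captioned_embed_url("VIDEO_ID"))
```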

We planned live captions rather than interpreting because of the international scope of the audience. The captioning company we worked with were extremely professional and produced incredibly high-quality transcripts that exceeded anything an auto-captioning program would have managed with our diverse accents and technical vocabularies. As this was the first time we ran this conference and used this workflow, we had to adjust things over the days. This included discovering that while we had tested that the transcripts came through to YouTube, and that the YouTube stream came through to the conference, we had failed to test how the closed captions came through in the stream to the conference. It turned out they did not come through at all. Thankfully it was a small enough conference that we could work directly with affected participants, but it was still mortifying to add additional friction to the event for people. In those moments my main concern was getting the technology working and not burdening participants with excessive apologies or unnecessary explanations or excuses when they just wanted to get on with the conference (I saved the histrionics for my event co-chairs later in the day).

Event participant (Gabrielle Hodge) 

Going into the conference, I was excited because I knew the organisers were conscious about communication access right from the beginning. I loved how it was promoted in terms of communication access and how different people responded to that on social media. The registration form was a dream: it was clear the organisers cared about access in general, that I could ask for different options, and that they would work actively to make it happen. It did not feel like the usual event registration tick box: a tiny box down the bottom asking whether you have any additional needs to be accommodated. I have never received a response from one of these boxes. Organisers either ignore it or address it far too late, like the day before the event begins. So it was great to see this aspect of conferencing addressed properly within the actual registration form. Overall, knowing that a lot of effort had gone into planning and including communication access at LingComm21 meant that I was way more invested in the conference. I also felt more understanding and forgiving about how different aspects of the conference played out.

This event was my first time in Gather, after months of seeing people either rave or complain about this platform. The LingComm21 Gather town was amazing: it looked like a real conference venue, played like an eighties throwback, and even had a couple of ducks hiding in various places that we had to hunt for. It was very clearly designed by internet and scicomm nerds, and you wouldn’t want it any other way. A few familiar conferencing moments: feeling lonely standing alone by my poster and being rescued with a warm welcome from one of the hosts; a random but interesting conversation with someone working on a completely different topic, where we spent most of the time trying to understand how we each use one specific word; finding my way (late!) to the first plenary talk and seeing a big name rush past in apparently the same flustered state (also late!); the social anxiety of walking into a big room with conference tables and people already sitting down at them (people! sitting down at tables!); and the little golden roped-off section near the main stage reserved for the captioners (very sweet and a nice touch).

Not all of the LingComm21 events were captioned, but a range of topics and talks were available with captions. I believe the organisers allocated the captioning schedule based on which sessions people who needed captions had indicated on the registration form that they wanted to attend. On entering the Gather space, the text-only spaces were clearly marked out, but fewer people were sitting in them. Most people were in the regular video space, speaking face-to-face with each other. I did notice people making a beeline for these video spaces, but possibly they already knew the people sitting there and/or wanted to speak with them. When the presenters began, I was ready and waiting for captions to show on the video, but they did not appear. Apparently there was a technical glitch affecting how the captions were streamed, so you could not actually watch the speaker video in Gather with the captions. There were two options: (1) watch the speaker live on YouTube with live captions, or (2) read the captions as page text via a separate link. I initially tried to watch them on YouTube, but the timing between the speaker and the live captions appearing on screen was too delayed, choppy and frustrating to read, so I switched to the page text. This was not ideal, as it is difficult to monitor two different browser windows (Gather and the caption page), and the Gather window also had different chat boxes popping up. However, the benefit of reading captions as page text was that it was like reading a book: easier to skim and absorb quickly. I could then jump back to the main Gather window and catch up on those chat boxes while waiting for the next page of captions to load.

I also attended a couple of workshops where we all watched a brief presentation, such as how to write and pitch a science communication idea to mainstream media outlets, and then talked about how to action our pitch ideas. Everyone in my group was so friendly and thoughtful. We were able to have a few different conversations going on at the same time: one group conversation plus private direct messages with different individuals. The expert speaker also came around and gave us concise and helpful feedback on our pitch ideas. Overall, these smaller interactions resulted in a very positive conference experience, and I think they are a valuable part of online conferencing. There was no need to think or worry about access; we just communicated via text with videos on, and it seemed everyone had a good time. This aspect of conferencing was probably easier than if we were all there in person, where we would need to pull out pen and paper or tech to communicate, plus those who are hearing would have to contend with background noise and so on. I also met a few different researchers who have taken their research skills and interests to the workforce outside academia, and that was inspiring too.

Event debrief (Lauren and Gab)

Our experience with the technological limitations of live captions at LingComm21 illustrates the challenges that arise when communication access does not go right. While the organisation and planning of live captions at LingComm21 were very much within the control of the conference organisers, we failed to anticipate the issue with caption integration, and lacked the ability to find a frictionless solution within the limits of Gather. Above we highlighted how we each dealt with the situation in the moment. It was relatively painless compared to the online conferencing experiences that deaf and hard of hearing researchers have described and discussed online during the pandemic and earlier. LingComm21 was small enough that the organisers had a sense of who was making the most use of the captions. Lauren and Gab also have a long enough professional relationship for Gab to trust that Lauren was earnestly trying to work within the constraints of the tech.

Unfortunately, there’s a commonly observed pattern at conferences where organisational and/or logistical issues lead to people missing out or even being denied communication access: regardless of whether deaf and hard of hearing people complain privately or publicly, the response from organisers is usually to dismiss the seriousness of the issue and/or respond defensively. When things go wrong with communication access and people are excluded, it’s common nowadays for people to complain on Twitter. This is often extremely effective for the person who has been excluded. It can also be very unpleasant for the organisers and they often feel attacked. However, it is important to recognise that such public complaints are not a scream into the void; rather, they result from the “democratisation of disagreement”. They are a call to action. Quick action. The kind of speed we do not get when we ask politely in private.

Not all people who are excluded due to lack of access consideration will feel comfortable communicating these issues, which is why building space into the planning and registration phase is so important. Minoritised people are used to being celebrated when it’s easy, and told to know their place when they try to challenge things (Erika Stallings refers to this as the ‘pet to threat’ trajectory for Black women in professional contexts). Deaf people are often minoritised in academic contexts, but deaf signing communities are diverse and each individual brings different experiences and needs. Access — and the experience of exclusion — is felt deeply and personally. Emotions can run high, even in well-negotiated situations, as we’ve highlighted above. The person you are excluding is not responsible for managing your feelings when your planning excludes them. This article on the deaf experience of “hearing arrogance” and exclusionary academia captures the frustration many feel when trying to engage with others at academic events. 

The appropriate response to these complaints is humility and action. Anything that even slightly smacks of defensiveness and the need to save face in public (e.g., “why didn’t you first contact us privately so we could resolve this?”) is never going to go down well. Such reactions amount to blaming deaf and hard of hearing people for your own failure to ask people what they need well in advance, especially at an event where better awareness should be expected. One recent example was an online conference that included presentations about deaf people and sign languages, all presented by hearing non-signers. The presentations had captions, but the live Q&A sessions afterwards had neither captions nor sign language interpreters. This meant deaf signers were locked out of public discussions about our languages and our lives. The only people who could participate were hearing, English-speaking people, most of whom were non-signers. This example illustrates how many conference organisers work against a very important principle: nothing about us without us. We’ve focussed on accessibility for deaf conference participants here, but there are many ways that conference organisers can make events more accessible to more participants.

When things go well

It would be really handy if we could just point you to a single checklist or set of resources for communication access, but it doesn’t work that way. No two groups of people are the same, needs constantly change, and technology changes too (for good and bad). Access is an intersectional activity, and making a conference more physically or linguistically accessible might then require a rethink of how to make it financially accessible. Thinking of accessibility as an ongoing learning experience helps ensure we do not get complacent about the work that needs to be done. Accessibility requires event organisers to listen, from the planning stages through to feedback, and to take action on what they have learnt. We don’t deny that a good conference takes work for organisers, but people whose access needs are forgotten or ignored are constantly doing the work just to make it in the door. You might only organise a handful of major conferences during your academic career, but some people have to do this work for every single event they want to attend. Even when a conference takes accessibility needs into account, there’s still monitoring and feedback to be done. Even a positive response to a conference can come with lots of additional work. You might have attended a perfectly adequate conference and then found yourself writing a lengthy blog post about communication access at online conferences…

In this post we’ve tried to illustrate what communication access at an online conference looks like from the perspective of a conference organiser and a participant. This isn’t an ‘us’ and ‘them’ situation. We are, at different times, organisers and attendees. For some of us, our access needs are implicitly considered in standard conference design. For the rest of us, conferences can be a place of additional labour just to attend. Accessibility starts with mutual respect and learning to listen so we can all engage. Accessibility is an ongoing project that requires ongoing conversations, willingness to try new things, and understanding that what might be an occasional consideration for you is an ongoing barrier for someone else. 


Acknowledgments 

Thanks to Maartje de Meulder, Indie Beedie and Gretchen McCulloch for feedback on earlier versions of this post. Thanks to the LingComm21 team and participants for such a great online conference. 

This post is part of a six-part series called LingComm21: a case study in making online conferences more social

  1. Why virtual conferences are antisocial (but they don’t have to be)
  2. Designing online conferences for building community
  3. Scheduling online conferences for building community
  4. Hosting online conferences for building community
  5. Budgeting online conferences or events
  6. Planning accessible online conferences

Gabrielle Hodge

Dr. Gabrielle Hodge is a deaf researcher specialising in the linguistics of signed languages. Her work focuses on the semiotics of face-to-face interaction and the sociology of language use.

She tweets from @gab_hodge, and her ORCID is: 0000-0001-8677-6149.

Lauren Gawne

Dr. Lauren Gawne is a Senior Lecturer in the Department of Languages and Linguistics at La Trobe University. Her research focuses on the use of gesture and grammar in interaction. She is one half of Lingthusiasm, a podcast that is enthusiastic about linguistics, with Gretchen McCulloch, and runs the blog Superlinguo.

She tweets from @superlinguo, and her ORCID is: 0000-0003-4930-4673.
