The following 3 tutorials will be held jointly with AVI 2024.

A Critical Assessment of ChatGPT and AI for Rethinking Learning, Working, and Collaborating in the Digital Age

date: June 3 afternoon

Beyond the AI “Hype”. 2023 was the year of the “AI hype”, driven by ChatGPT, the most prominent example of large language models and generative AI. For some, ChatGPT offers exciting possibilities for exploration, clarification, learning, and practice — for others, it suffers from inaccurate information, reduced critical thinking, and overreliance, as people may accept AI-generated answers without question. However, like all tools, its utility is determined by how it is used. The tutorial will explore different usage scenarios, grounded in an analysis of the design trade-offs between them.

The tutorial will provide the seeds for exploring, discussing, and assessing components of a future agenda for the AVI Research Community, including:

Organization. The tutorial will present and critically examine the abovementioned themes and explore an agenda for future research activities and developments.

The main focus of the tutorial is to have all participants engage as active contributors rather than passive recipients: following a “flipped classroom” approach, they will be encouraged to contribute their own ideas and experiences and to discuss controversial issues.

Timeline: 4 hours total, consisting of three one-hour sections and three 20-minute breaks


Gerhard Fischer, University of Colorado Boulder (USA)

Captioning Visualizations with Large Language Models (CVLLM)

date: June 4 afternoon

Automatically captioning visualizations is not new, but recent advances in large language models (LLMs) open exciting new possibilities. We will provide an introduction to LLMs and discuss ongoing efforts in this area, applications and implications of recent work, and promising future directions. 

It is well established that visualizations have advantages over text-based representations for a number of analysis tasks, since they more fully leverage our innate visual processing capabilities. However, it has also been found that visualizations are well supported by textual augmentations such as captions. Further, recent advances in large language models have led to their adoption across an unprecedented number of domains. Our goal is thus to provide attendees with a grounding in large language models and their applicability to complex visualizations, in order to provide intelligent functionality such as captioning and prompts.
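To give a flavor of the kind of functionality the tutorial targets, the sketch below shows one common pattern for LLM-based captioning: serializing a chart's metadata and underlying data into a textual prompt that an LLM could complete into a caption. The chart fields, field names, and prompt wording are illustrative assumptions, not a description of any specific system covered in the tutorial; a real pipeline would send the prompt to an LLM API and post-process the response.

```python
# Minimal sketch: turning a chart specification into an LLM captioning prompt.
# All field names and the prompt template are illustrative assumptions.

def build_caption_prompt(chart):
    """Serialize chart metadata and data rows into a natural-language prompt."""
    rows = "\n".join(f"- {label}: {value}" for label, value in chart["data"])
    return (
        f"Write a one-sentence caption for a {chart['type']} chart titled "
        f"\"{chart['title']}\".\n"
        f"X axis: {chart['x']}. Y axis: {chart['y']}.\n"
        f"Data:\n{rows}\n"
        "Caption:"
    )

# Hypothetical example chart, for illustration only.
chart = {
    "type": "bar",
    "title": "Tutorial Attendance by Day",
    "x": "Day",
    "y": "Attendees",
    "data": [("June 3", 42), ("June 4", 57), ("June 7", 35)],
}

print(build_caption_prompt(chart))
```

The resulting prompt would then be passed to an LLM of choice; the design question the tutorial addresses is which chart properties and data summaries to include so that the generated caption is accurate and useful.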

Recommended background knowledge:


Giuseppe Carenini, University of British Columbia (Canada)
Jordon Johnson, University of British Columbia (Canada)

Techniques for Notation Design - Swings and Roundabouts

date: June 7 morning

With the growth of low-code/no-code “solutions”, highly configurable devices, and greater user empowerment through advanced functionality, much rests on users being able to work effectively with often proprietary notations. The tutorial will develop the concept of notation design and the human factors that determine the strengths and weaknesses of specific notations. This will be explored and explained using a range of examples and a core reference model.

The tutorial will introduce a framework for articulating the space of notation designs and explore human factors that influence effective notation design. By participating, you will:

This tutorial will be ideal for early-career researchers who encounter the need to consider interaction design involving notations, for industry-based developers facing the challenge of empowering users of complex systems, and for mature researchers interested in alternative evaluation perspectives.




Chris Roast, Sheffield Hallam University (UK)