Main topics of the workshop
Assessing the performance of LLMs for analyzing HSM
The workshop seeks to stimulate methodological explorations of whether and how LLMs may enhance
the performance of NLP tasks that remain challenging on HSM, and how LLMs can assist qualitative analysis
of HSM. We are especially interested in empirical assessments of LLMs on real-world HSM and in practical
recommendations for health informatics researchers on applying and customizing LLMs for different study
contexts. For example, can LLMs produce accurate aspect-level sentiment labels and meaningful topic
clusters? Can LLMs help qualitative researchers conduct grounded-theory analysis and thematic analysis?
Can LLMs reliably identify misinformation in health-related social media?
Identifying novel areas of research enabled by LLMs on HSM
LLMs may also enable applications that were previously underexplored in the analysis of HSM. For
example, recent GPT models support multi-modal analysis that integrates text, images, and potentially
audio and even video data. This capability can greatly enhance the utilization of HSM beyond the textual
format, allowing for a more comprehensive understanding of health content on social media. This
workshop is interested in gathering cutting-edge explorations of novel applications of LLMs on
health-related social media that broaden analysis modalities or perform multi-task analysis, such as
classifying aspects, sentiments, and intents simultaneously without having to develop individual
models.
Discussing challenges and issues brought by LLMs to research using HSM
Despite the promise of LLMs, there are increasing concerns around data security, biases and
disparities, research replicability, and ethical issues in their use. These concerns are particularly
important in the health domain, given the sensitive nature of health conditions and the severe
consequences of leaking personal health information. This workshop will invite researchers from
different backgrounds to discuss the challenges that LLMs bring to analyzing HSM, especially how
patients and the public may perceive these issues and how the research community and policy makers can
design new guidelines in response. We will also invite consumer health researchers and community-based
researchers to share the perspectives of patients and the public on the use of HSM in LLM research, and
to discuss how research communities can best protect patient safety while promoting the benefits of
HSM-based research using LLMs.
Call for papers and reviewing process
We welcome submissions of extended abstracts (4 pages maximum, including references) related to the
three main topics listed in this proposal. Submissions may include late-breaking work, literature
reviews, opinion pieces, and case studies that describe cutting-edge applications and research using
LLMs on HSM. The workshop organizers will rigorously review submissions based on their relevance
to the workshop topics, research quality, impact, novelty, and interest to the intended audience of the
workshop. Accepted submissions will be invited for presentation at the workshop to stimulate discussion.
Submission instructions
Please use the AMIA podium abstract template to format your submission. To submit, please send your extended abstract to he32 at uwm dot edu before 10/11, 11:59pm PST. We will send notifications to authors on 10/18. For any questions related to submissions or the workshop in general, please contact Dr. Lu He directly (he32 at uwm dot edu).