By exploring the similarities between large language models (LLMs) and human representations, researchers at The Hong Kong Polytechnic University (PolyU) and their collaborators have shed new light on the extent to which language alone can shape the formation and learning of complex conceptual knowledge.
Their findings also revealed how the use of sensory input for grounding or embodiment – connecting abstract with concrete concepts during learning – affects the ability of LLMs to understand complex concepts and form human-like representations.
The study, conducted in collaboration with scholars from Ohio State University, Princeton University and City University of New York, was recently published in Nature Human Behaviour.
Led by Prof. LI Ping, Sin Wai Kin Foundation Professor in Humanities and Technology, Dean of the PolyU Faculty of Humanities and Associate Director of the PolyU-Hangzhou Technology and Innovation Research Institute, the research team selected conceptual word ratings produced by state-of-the-art LLMs, namely ChatGPT (GPT-3.5, GPT-4) and Google LLMs (PaLM and Gemini). They compared them with human-generated word ratings of around 4,500 words across non-sensorimotor (e.g., valence, concreteness, imageability), sensory (e.g., visual, olfactory, auditory) and motor domains (e.g., foot/leg, mouth/throat) from the highly reliable and validated Glasgow Norms and Lancaster Norms datasets.
The research team first compared pairs of data from individual humans and individual LLM runs to measure the similarity between word ratings across each dimension in the three domains, using results from human-human pairs as the benchmark. This approach could, for instance, highlight the extent to which humans and LLMs agree that certain concepts are more concrete than others. However, such analyses might overlook how multiple dimensions jointly contribute to the overall representation of a word. For example, “pasta” and “roses” might receive equally high olfactory ratings, but “pasta” is in fact more similar to “noodles” than to “roses” once appearance and taste are also considered. The team therefore conducted representational similarity analysis, treating each word as a vector across the non-sensorimotor, sensory and motor dimensions, for a more complete comparison between humans and LLMs.
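To make these two comparison steps concrete, the following is a minimal sketch in Python, assuming hypothetical human and LLM rating tables rather than the study’s actual data or analysis code: it correlates ratings dimension by dimension, then treats each word as a vector of ratings and correlates the pairwise word-distance structure of the two sources, a simple form of representational similarity analysis.

```python
# Illustrative sketch only: random placeholder ratings, not the Glasgow or
# Lancaster Norms, and not the authors' analysis pipeline.
import numpy as np
from scipy.stats import spearmanr
from scipy.spatial.distance import pdist

rng = np.random.default_rng(0)
n_words, n_dims = 100, 10            # dimensions such as valence, concreteness, visual, ...
human = rng.uniform(1, 7, size=(n_words, n_dims))
llm = human + rng.normal(0, 1, size=(n_words, n_dims))  # noisy stand-in for one LLM run

# Step 1: dimension-wise agreement - do humans and the LLM rank words
# similarly on each individual dimension (e.g., concreteness)?
dim_agreement = []
for d in range(n_dims):
    rho, _ = spearmanr(human[:, d], llm[:, d])
    dim_agreement.append(rho)

# Step 2: representational similarity analysis - treat each word as a vector
# over all dimensions and compare the pairwise word-distance structure of the
# human and LLM rating tables.
human_rdm = pdist(human, metric="correlation")
llm_rdm = pdist(llm, metric="correlation")
rsa_rho, _ = spearmanr(human_rdm, llm_rdm)

print(f"mean dimension-wise agreement: {np.mean(dim_agreement):.2f}")
print(f"representational similarity:   {rsa_rho:.2f}")
```

In this picture, the “pasta” and “roses” example corresponds to step 1 applied to the olfactory dimension alone, while step 2 is what allows appearance and taste to pull “pasta” closer to “noodles”.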
The representational similarity analyses revealed that the word representations produced by the LLMs were most similar to human representations in the non-sensorimotor domain, less similar for words in the sensory domain, and most dissimilar for words in the motor domain. This highlights the limitations of LLMs in fully capturing humans’ conceptual understanding. While non-sensorimotor concepts are represented well, LLMs fall short for concepts involving sensory information, such as visual appearance and taste, and for those involving body movement. Motor concepts, which are described less fully in language and rely heavily on embodied experience, are even more challenging for LLMs than sensory concepts such as colour, which can be learned from textual data.
In light of the findings, the researchers examined whether grounding would improve the LLMs’ performance. They compared the performance of more grounded LLMs trained on both language and visual input (GPT-4, Gemini) with that of LLMs trained on language alone (GPT-3.5, PaLM). They discovered that the more grounded models incorporating visual input exhibited a much higher similarity with human representations.
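As a rough illustration of this grouped comparison, the sketch below again uses random placeholder ratings, so the printed numbers carry no meaning; it simply shows how each model could be scored against the human ratings with the same representational-similarity measure and the scores averaged within the language-plus-vision and language-only groups.

```python
# Sketch of the grouped comparison; model names follow the article, but the
# rating tables are random placeholders rather than real model outputs.
import numpy as np
from scipy.stats import spearmanr
from scipy.spatial.distance import pdist

def rsa_score(model_ratings, human_ratings):
    # Correlate the pairwise word-distance structure of a model's ratings
    # with that of the human ratings.
    rho, _ = spearmanr(pdist(model_ratings, metric="correlation"),
                       pdist(human_ratings, metric="correlation"))
    return rho

rng = np.random.default_rng(1)
human = rng.uniform(1, 7, size=(100, 10))
models = {name: rng.uniform(1, 7, size=(100, 10))
          for name in ["GPT-4", "Gemini", "GPT-3.5", "PaLM"]}

groups = {"language + vision": ["GPT-4", "Gemini"],
          "language only": ["GPT-3.5", "PaLM"]}
for label, names in groups.items():
    mean_rsa = np.mean([rsa_score(models[n], human) for n in names])
    print(f"{label}: mean similarity to humans = {mean_rsa:.2f}")
```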
Prof. Li Ping said, “The availability of both LLMs trained on language alone and those trained on language and visual input, such as images and videos, provides a unique setting for research on how sensory input affects human conceptualisation. Our study exemplifies the potential benefits of multimodal learning, a human ability to simultaneously integrate information from multiple dimensions in the learning and formation of concepts and knowledge in general. Incorporating multimodal information processing in LLMs can potentially lead to a more human-like representation and more efficient human-like performance in LLMs in the future.”
Interestingly, this finding is also consistent with those of previous human studies pointing to representational transfer. Humans acquire object-shape knowledge through both visual and tactile experience, with seeing and touching objects activating the same regions of the human brain. The researchers pointed out that – as in humans – multimodal LLMs may use multiple types of input to merge or transfer representations embedded in a continuous, high-dimensional space. Prof. Li added, “The smooth, continuous structure of embedding space in LLMs may underlie our observation that knowledge derived from one modality could transfer to other related modalities. This could explain why congenitally blind and normally sighted people can have similar representations in some areas. Current limits in LLMs are clear in this respect.”
Ultimately, the researchers envision a future in which LLMs are equipped with grounded sensory input, for example through humanoid robotics, allowing them to actively interpret the physical world and act accordingly. Prof. Li said, “These advances may enable LLMs to fully capture embodied representations that mirror the complexity and richness of human cognition, and a rose in an LLM’s representation will then be indistinguishable from that of humans.”
Hashtag: #PolyU #HumanCognition #LargeLanguageModels #LLMs #GenerativeAI