Researchers have investigated user engagement with robot companions, and the literature presents several models of engagement. One example is a framework that models both the causes and effects of engagement, combining features of the user's non-verbal behaviour, the task, and the companion's affective reactions to predict children's level of engagement. Many people are uneasy about interacting socially with a robot, and people in general tend to prefer smaller robots to large humanoid ones; they also prefer robots to perform tasks such as cleaning the house rather than providing companionship. In verbal social interactions, people tend to share less information with robots than with humans. Despite this initial reluctance, exposure to a social robot may decrease uncertainty and increase willingness to interact with it, and research shows that over time people speak for longer and share more information in their disclosures to a social robot. If people's interaction with a social robot is seen as playful (as opposed to focused on completing a task or being social), they may be more likely to engage with the robot in the future.
==Symbolic behaviour and social perception==
AI agents and robotic systems perform actions—such as gestures, facial expressions, or formalized outputs—that observers interpret as symbolically meaningful. These behaviours, while typically generated through programmed instructions or statistical models, can resemble human social conventions and are often perceived as conveying intent or emotion. Studies in human-robot interaction (HRI) suggest that anthropomorphic design features and symbolic cues can influence users' interpretations, potentially increasing engagement, perceived trust, and emotional responsiveness during interaction. Research in
social cognition indicates that people attribute meaning or emotional states to artificial agents, particularly in contexts where nonverbal cues, such as eye contact or nodding, are present.
==Contextual framing and situated social experience==
Recent research in human-robot interaction has proposed
nonessentialist perspectives on how robots come to be perceived as social actors. According to Kaptelinin and Dalli (2025), the "sociality" of robots does not arise from inherent design features or fixed human tendencies, but from how people experience robots as part of meaningful collaborative contexts as a whole. They term this process ''contextual framing''. This approach emphasizes that the perception of robots as social partners depends on factors such as shared goals, collaboration, space and environment, and personal significance rather than on designed appearance and behaviours alone. While Kaptelinin and Dalli (2025) reject essentialist views that attribute perceptions of sociality solely to robot design, they acknowledge that design choices (such as anthropomorphic form, communicative abilities, and behavioural cues) can influence how interaction contexts develop, and that these choices form part of a greater whole that shapes the social meaning of an encounter. Alongside designed aesthetics, designed behaviours, and programmed or pre-planned actions, dimensions such as ''contextual importance'', ''contextual agency'', ''collaboration'', and the ''personal impact'' of the robot (in relation to the human) must be considered to gain a fuller understanding of the contextual framing of a human's social experience with a robot. Whether or not a robot is perceived as social is thus mediated by the context in which the interaction arises, although design features themselves can contribute, in part, to that context.

==Societal impacts==