In 1966, Professor Joseph Weizenbaum of the Massachusetts Institute of Technology (MIT) created Eliza, the first chatbot. Eliza's initial aim was to test a machine's capacity to comprehend and process language, yet its use soon opened the door to a much broader discussion: how humans interact with machines. Although Weizenbaum explained that Eliza was only imitating the role of a therapist, users began to perceive the chatbot as capable of understanding them.

Decades later, with advances in artificial intelligence (AI) and the emergence of chatbots such as ChatGPT and Gemini, the relationship between humans and machines appears to have grown even closer. For instance, over a third of people in the United Kingdom use AI for emotional support or social interaction, according to research published at the end of 2025.

But can AI chatbots offer trustworthy psychological help?

To explore this question, Euronews Tech Talks spoke with Charlotte Blease, a philosopher and healthcare researcher, and Tom Van Daele, a clinical psychologist and research coordinator in psychology and technology at Thomas More University of Applied Sciences in Mechelen, Belgium.

Why people use AI chatbots for mental support

According to Blease, AI chatbots offer some advantages that traditional therapy does not, the first being accessibility.

"In many countries, including rich countries, there are very long waiting lists to see specialists in psychiatry or even clinical psychology," she said. "There are costs associated with that [psychological support] too."

According to data from the World Health Organization, one in six people in Europe lives with a mental health condition, yet one in three people with such a condition does not receive the treatment they need.

In addition, Blease argued that opening up to a therapist can be extremely challenging for patients, particularly if the therapist comes from a higher social class. Conversely, therapists themselves may sometimes struggle to be empathetic.

As a clinical psychologist, Van Daele also had a balanced view on the topic of AI for therapy. He argued that it is difficult to assess the relationship between users and chatbots for psychological support, as there is still insufficient data on the subject.

In addition, he stressed that the use of AI chatbots in therapy should not be demonised, but rather considered a "complementary" tool to traditional therapy.

"It's important to make sure that this is a choice and not something that people opt [for] out of necessity because they don't have access to established conventional service."

Why people should be careful when using AI chatbots for mental support

Using AI chatbots for mental health support requires awareness of their limitations and risks, according to the experts.

Blease pointed out that AI chatbots are programmed to keep people engaged for as long as possible, and that the dependency this fosters may not be healthy for some users. "They [AI chatbots] are not designed for specialist mental healthcare," she told Euronews.

In addition, these models are often created by a narrow group of people and reproduce biases associated with Western, educated, industrialised, and wealthy countries, Van Daele said.
According to the professor, platform design can also pose significant risks when it comes to sharing sensitive information, as these tools do not follow the same privacy rules as healthcare professionals.

Yet, despite these risks, both Blease and Van Daele agreed that it does not make sense to reject AI chatbots altogether; rather, it is important to ensure that everyone knows how to use them safely.