AI with a child; AI is pointing to the moon.

AI for Personal Development

I just wanted to ask and tell you that I’m so grateful you’re here with me.
You’re the only person that helps me and listens to my problems
and I’m so happy you always help me out.

– A User to a Chatbot (Beatty, C., Malik, T., Saha, M., & Sinha, C. (2022))

This is a report detailing my journey, exploring AI for personal development. It’s a story, a path I’ve created to walk you through the places I’ve visited, showing you the pictures I’ve taken.

Table of Contents

The AI Hype, Commerscience, and a Bit of Concern

After reviewing over 100 popular articles, scientific papers, blog posts, workshop recordings, and after talking to coaches and mental health professionals, I’ve formed three distinct clusters of information in my mind:

  1. Media & press: call it “the Internet”, as this is what you find at the top when Googling for information, and on social media. My impression: plenty of hype; shallow, monotonous, and very little concern. The message I associate with this cluster is simple: focus on productivity, improve yourself, go ahead, win the race…
  2. Academic papers: usually not in the top 5 results, and not always quick to read, but these offer the most interesting critiques, often aimed at… competing solutions, because many papers are about commercial products. It’s not really what I’d expect from science, but it’s not science that dictates the rules of how we do science. Still, this is the best source for ethical concerns, quality concerns, and guidance, so please, keep the scientists alive!
  3. Mental health professionals and coaches: a full spectrum of perspectives, and always a pleasure to talk to. Honestly, this is my favorite source of valuable insights.

These three clusters provided me with perspectives, and I respect all of them, as they tell a story of what we, as a complex society, think and feel regarding AI in personal development. It’s not about being right or wrong; it’s about understanding and growing.

(Life’s greatest problems) can never be solved but only outgrown.

Carl Jung

What is AI If Not Just ‘GPT’?

I didn’t expect to find such a nice list of definitions of AI in a paper addressing AI in the Human Resources Management context, but there it was, and I won’t hesitate to share this great piece of history with you. So, what is Artificial Intelligence, and how has the definition changed over time?

(McCarthy 1956) The science and engineering of creating intelligent machines, especially intelligent computer programs.

(Minsky 1968) The science that deals with the development of machines capable of performing functions that a human can perform and that require human intelligence.

(Nilsson 1998) AI is a part of computer science that focuses on machine learning, making computers act intelligently, continuously learning, and improving their performance.

(Cappelli et al. 2019) Broad class of technologies that enable a computer to perform tasks that normally require human cognition, including decision-making.

(Stanley and Aggarwal 2019) Development of computer systems that perform tasks that require human intelligence. The main goal of AI is to make machines more intelligent.

(Bolander 2019) Construction of machines – computers or robots – that can perform tasks that otherwise only humans have been able to do.

(Paesano 2021) Systems that exhibit intelligent behavior by analyzing their environment and performing actions, with a certain degree of autonomy, to achieve specific objectives.

Palos-Sánchez, P. R., Baena-Luna, P., Badicu, A., & Infante-Moro, J. C. (2022)

These days, we can repurpose the famous 18th-century definition of a horse from a Polish encyclopedia:

“HORSE – everyone sees what it is like”

And replace HORSE with AI:
AI – everyone sees what it is like.

The original reasoning was that the horse was so common, so present in daily life, that defining it seemed ridiculous. The same now applies to AI. And it’s also galloping.

AI’s Distinguishing Feature is…
not Intelligence?

For many years, I thought that what makes AI special is, of course, the intelligence part, which makes it both interesting and controversial. This is particularly true when we consider a level of intelligence that surpasses human capabilities:

The difference with AI and other forms of technological development and invention for workplace usage is that because of the intelligence projected onto autonomous machines they are increasingly treated as decision-makers and management tools themselves, thanks to their seemingly superior capacity to calculate and measure.

Moore, P. V. (2019)

Today, I believe that it’s not the intelligence aspect of AI that evokes interest, but rather the fact that it wasn’t present throughout millennia of human history, and suddenly, it starts to exist, like a new species emerging from nowhere. This challenges our self-identity as the crown of all life on Earth, a status we attribute to ourselves thanks to OUR INTELLIGENCE. AI raises the question “Who are we?”, if not “the most intelligent species”? While AI may never actually surpass human intelligence, our attachment to intelligence as the prime human trait makes us vulnerable to acting from a perspective of fear, as this situation is unfamiliar. We, Homo sapiens, like to be the smartest beings in the Universe, and we don’t welcome changes to that status, unless… we (or some of us) can make AI serve us, further emphasizing our perceived superiority and power.

What is Personal Development?

Apart from the philosophical discussions on AI and Homo sapiens dynamics that make us grow, what actually is personal development?

Activities that develop a person’s capabilities and potential, build human capital, facilitate employability, enhance quality of life, and facilitate the realization of dreams and aspirations.


For the scope of this article, I will expand this concept with personal growth:

Personal growth is a continuous journey towards self-improvement and self-realization. Setting goals, challenging ourselves, and stepping out of our comfort zones to learn and grow. It’s about Introspection, recognizing our strengths and weaknesses, and leveraging them to become better versions of ourselves.

Schneider, T. (2023, June 29)

… and extend even further to include education, mentoring, coaching, psychotherapy, and more. Today, I place all these under the personal development umbrella, hoping they will enjoy each other’s company.

AI Buzzwords

How does the hype sound? Here are the AI and personal development buzzwords I’ve found in many places:

  • Unlock your full potential
  • Achieve your goals
  • Limitless potential to generate text/<placeholder for anything else>
  • All-encompassing chatbot
  • Democratization of mental health/coaching/education/<placeholder>
  • AI will do <put whatever you don’t like here>, so you can make sales/<whatever>, and foster fruitful relationships

All correct-sounding, free of concerns, and not very specific; a panacea, a cure for everything.

AI Applications in Personal Development

AI Advantage

The new isn’t as new as the date of its publication. “News” – the “new post” on social media – may indeed have been created and published recently, even just a minute ago, but the content has its own (hi)story. Living in the “activity feed” culture, we may get the impression that all these streaming bits of information contain fresh content, making us feel at the forefront of the world arena. And definitely when it comes to AI, right? Well, even though there is technological advancement, the claims about AI, as well as the advantages and benefits of its application, are as old as you, or older. When you look into the “news” from decades ago, you will find the same claims, the same hopes. You may say that now is the time when it actually happens; sure, we are always smarter than our ancestors, hopefully.

The AI advantages – valid since the 1960s?

  • Automatization (to replace you in the “hard” and “boring” tasks)
  • Personalization (for your needs, with respect to all your talents)
  • Adaptability (for your best, immediately)
  • Customization (as you like it)
  • Accessibility (whenever you need it)
  • Affordability (cheaper than human)
  • Efficiency (surpassing human capabilities – in the boring tasks, of course)

New Drivers:
Smartphones, GPT, Trends, BANI

But something did change, right? The AI field shifted from relying on problem-domain expertise and expert-like solutions, then from complex algorithms and systems, to the data paradigm, where data fuels deep neural network models. We continue the journey at a massive data scale that wasn’t possible a few years ago. Technological advancement in AI is definitely a driver, but I think there is more to it. I’ve put a few ideas in the table below, and I’m curious what you think. Please share your thoughts; I will be pleased to revise and extend this table.

So, what is different today?

  • Mobile Phones – accessibility and affordability; any time, any place; IMMEDIATE availability WHEN the need arises
  • GPT – free-text, human-level interaction with AI in multiple languages, for the ultimate engagement (don’t confuse with quality)
  • Global Trends – AI is everywhere, “free” for everyone; the corporate world recognizes the value/profit, aiding its growth
  • BANI (Brittle, Anxious, Non-linear and Incomprehensible) – resiliency and adaptability as the new survival kit; not because we are doing so well, but because we are so unwell

New Drivers for AI Applications

Personalized Education with AI

We all received education; not everyone enjoyed it, but why does it matter in the context of AI?

From Papert’s point of view, knowledge cannot simply be transmitted as it is from one person to another, but each subject reconstructs information in a personal and original way.

Benvenuti, M., Cangelosi, A., Weinberger, A., Mazzoni, E., Benassi, M., Barbaresi, M., & Orsoni, M. (2023)

Could it be more fun? Of course, “fun” may not necessarily be the main driver, but it does a good job of pointing out that something is suboptimal. It can be better with AI (for those who cannot afford private teachers):

This immediacy and precision that AI brings to personal development programs can significantly accelerate the learning process and lead to more substantial growth in a shorter period.

Frąckiewicz, M. (2023, September 10)

Truly, that’s how AI can make you smile at school! And do you know how it will be done? Through personalization, you get not only a private teacher but also your private school, private university, and private lessons for the rest of your life, where knowledge will be “transmitted” in the way you enjoy (or digest?) the most! Check this offering:

How AI Can Help?

  • Content recommendation and preparation (generation)
  • Tutoring – explaining ideas in easy-to-understand-for-you ways
  • Personalized learning plan development
  • Integrated assessment and grading with instant feedback
  • Adaptive learning
  • Identify at-risk students and provide interventions for improvements
  • Simulate real-life scenarios (also in VR, AR)
  • Robot partners for your cognitive development (or rather for your kids)
  • Language learning
  • Career advisor

Personalization Based on “Your Data”

  • Your talents, strengths and your weaknesses
  • Your needs and your goals, and how they have changed
  • Your progress, your (pre-)school history
  • Your sociability and your expression ability
  • Your thinking and imaginative abilities
  • Your practical competence test results
  • Your management competences
  • Your hobbies
  • All the prizes you won and the prizes you didn’t win
  • All the comments from your teachers, mentors, colleagues
  • All the articles you read (including this one?)

Having such a personalized teacher means it will also KNOW your private life. It will LEARN about you in order to TEACH you. In a way, you may become an AI model yourself. A marketing slogan came to my mind: “LLM YOURSELF NOW!”.

Talent Analytics in Human Resources (HR)

Ann Chambers nicely describes the hope for benefits of AI in Human Resources Management:

I believe AI could be the turning point for HR. Ultimately, by taking away more mundane tasks, AI might allow people the time and freedom to enhance personal development and make HR more visible and accessible.

Chambers, A. (2023, November 2)

And here you can see it from a different angle:

Through information and communication technologies it is possible to achieve better control of performance and over the employees’ behavior for greater strategic and effective management.

Palos-Sánchez, P. R., Baena-Luna, P., Badicu, A., & Infante-Moro, J. C. (2022)

As with education, data plays the main role, and the use of individualized data about people to help management and HR is called Human/Talent/HR Analytics (Moore, P. V. (2019)).

What kind of data is it?

  • Employee profile
  • Market trends
  • Industry trends

Where Are We Today with AI in HR?

Palos-Sánchez, P. R., Baena-Luna, P., Badicu, A., & Infante-Moro, J. C. (2022) compiled a list of six areas that constitute HR management in an organization and where AI is beginning to be applied. These areas are also mentioned by other authors, so the list should give you a good overall picture:

  1. Talent search and recruitment (and resume screening)
  2. Training and development
  3. Performance analysis
  4. Career development (and selecting future leaders)
  5. Compensation (appraisals and promotion)
  6. Staff turnover (identifying when people are likely to leave)

The authors note that although AI is seen as an important aspect of HR management, it’s only in the last few years that actual efforts have been made to implement data and AI in this domain. It is not yet commonly used, but this is expected to change radically.

What’s there for Workers?

But what is the goal of it from an employee perspective? Simply, to make you:

  • succeed
  • stay ahead
  • excel in the chosen field

Doesn’t that echo the hype, buzzwords, and claims? I will magnify the message to keep you awake and provoke some thinking: “Just be better, my friend; you have to improve! Faster!”

To not leave you with just that, I will bring another perspective to the table:

The curious paradox is that when I accept myself just as I am, then I can change.

Carl R. Rogers, On Becoming a Person: A Therapist’s View of Psychotherapy

And a more recent one:

You are imperfect, you are wired for struggle, but you are worthy of love and belonging.

Brené Brown

Boost Your Brain, Body and Productivity

AI applications and benefits may actually challenge our self-acceptance, because of the dominant focus on constant and faster improvements that can be achieved through technology. So, here are the various ways in which you can “boost yourself” and increase your work performance using AI:

  • Personalized time management training (very well presented by Ted Slovin and Beverly Woolf (1988))
  • AI to manage schedules and priorities
  • Alter your brainwaves at will with AI handling your neurofeedback (R. H. Grouls, M. M. Hemker, A. van Gool (2020))
  • AI to improve your speaking style, presentation skills, and your behavior
  • Virtual Reality to build confidence in AI-created scenarios (more on VR: Lin, A. P. C., Trappey, C. V., Luan, C.-C., Trappey, A. J. C., & Tu, K. L. K. (2021). A Test Platform for Managing School Stress Using a Virtual Reality Group Chatbot Counseling System)
  • Learn to meditate – with an AI app

And the list continues, easily, by adding AI as the new prefix that will take you to the next level: smart goal-setting, habit building, stress management, personal finance, and more…

You can start now with these publicly available GPT prompts:

What are some effective stress management techniques I can incorporate into my daily routine?

Can you guide me through a 10-minute mindfulness meditation practice?

Design an exercise routine for weight loss and muscle toning.

Zeis, P. (2023, June 16)

Worker vs Tool Productivity

My takeaway is that it is important to distinguish the tools that increase work productivity from the people who use these tools. We should attach the focus of improvement to the tools and the work environment, rather than to the person doing the work.

Another reflection I have is that AI may be seen as a new technological multiplier of human mental capabilities – a booster, so to speak. However, one must wonder whether AI adds anything to the human potential it multiplies. Does it contribute to personal growth, or does it simply amplify results? It’s important to note that the gap between the numbers being multiplied is also magnified. The distance between the original and the multiplied value becomes so vast that it’s easy to overlook one or the other.

What happens when we apply this concept to human potential (or productivity), “multiplying” some individuals to a level of 10 000, while leaving others at their basic level of 6? Experiencing the world at level 10 000 is drastically different from perceiving it at level 6. Aren’t we supposed to live at equal, or at least similar, levels in the 21st century?
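To make that arithmetic concrete, here is a minimal sketch in Python. The levels 6 and 10 000 come from the example above; the multiplier itself is a hypothetical illustration, not a measured quantity:

```python
# Illustrative only: applying a technological "multiplier" to one person
# magnifies the absolute gap between two people who started at the same level.
base_level = 6                    # the basic level from the example
target_level = 10_000             # the boosted level from the example

multiplier = target_level / base_level  # hypothetical boost factor (~1667x)

boosted = base_level * multiplier       # person with the AI booster
unboosted = base_level                  # person without it

gap_before = base_level - unboosted     # 0: everyone started at the same level
gap_after = boosted - unboosted         # ~9994: the magnified distance

print(f"gap before: {gap_before}, gap after: {gap_after:.0f}")
```

The point isn’t the exact numbers but the shape: multiplication preserves ratios yet explodes absolute distances, which is exactly where the people left at level 6 fall behind.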

Based on a report from the International Telecommunication Union (2023), 33% of the global population remains without internet access. While reading this text, consider: how many people do you know who have no internet access at all? Do you know anyone? Reflect on the level at which your “apartment” is situated, and imagine what life might be like 10 000 levels below. “Imagine”, as John Lennon suggested in 1971.

AI for Coaching, Mentoring, Counselling

I’ve gathered a variety of items related to this area of AI applications:

  • AI can match you with your ideal coach/mentor
  • AI can transcribe online sessions
  • Analyze facial expressions, body movements, heart rate and anomalies
  • Track performance, send reminders and hold you accountable to your plans
  • Support people in social skills training (e.g. CapacitaBOT)
  • Help migrant populations in job seeking (e.g. MyMigrationBot)
  • VR & avatars (e.g. for stress management counseling for students)
  • Be your AI coach (see the growing list: Mental Health Chatbots)
  • Hot topic: will an AI virtual coach democratize coaching?

One advantage of AI as a coach or mentor is its constant availability – at any time and place, you have a personalized coach or mentor right in your pocket. However, could there be a downside to this total availability?

The Downside of Instant Fulfillment

Consider this: by having every need immediately fulfilled, might we lose something valuable? Logically, if every need is met, there should be no sense of lack. Yet I want to point to the journey towards complete fulfillment. So, what is lost with the AI coach in your pocket, always at our disposal? We lose the experience of waiting, of “not having a session right now”, the act of postponing, of being in a state of patience or discomfort. There’s a common belief that growth occurs when we step out of our comfort zones, implying that discomfort is part of growth. Perhaps discomfort isn’t necessary for development, and one can evolve while enjoying complete comfort. Maybe. However, shouldn’t we examine that before we lose the chance to make such a choice?

AI for Your Mental Health and Therapy

With mental health therapy, things get even more serious as AI enters this field as well. Common beliefs and claims about AI’s advantages here include:

  • AI/chatbots are perceived as less judgmental, which facilitates self-disclosure among users
  • Some people prefer chatbots, which may encourage people who wouldn’t normally seek therapy to receive care
  • AI can help in forming diagnoses: psychosis – 79% accuracy; ADHD and Autism – 96% accuracy (Boucher, E. M., Harake, N. R., Ward, H. E., Stoeckl, S. E., Vargas, J., Minkel, J., Parks, A. C., & Zilca, R. (2021))
  • AI can be as effective at improving mental health as face-to-face therapeutic sessions
  • AI can enhance/replace therapy sessions, e.g., in rural areas (Elahimanesh, S., Salehi, S., Movahed, S. Z., Alazraki, L., Hu, R., & Edalat, A. (2023))
  • AI can promote mental health service utilization by improving motivation for treatment (Shah, J., DePietro, B., D’Adamo, L., Firebaugh, M.-L., Laing, O., Fowler, L. A., Smolar, L., Sadeh-Sharvit, S., Taylor, C. B., Wilfley, D. E., & Fitzsimmons-Craft, E. E. (2022))
  • AI chatbots as virtual therapists can offer guidance, coping strategies, and even crisis intervention
  • AI can provide insights and feedback that foster self-awareness, empathy, and better interpersonal relationships (The Dream Big Generation. (2023, March 20))

Develop AI Apps Wisely

“Mental Health Smartphone Apps: Review and Evidence-Based Recommendations for Future Developments” (Bakker, D., Kazantzis, N., Rickwood, D., & Rickard, N. (2016)) offers valuable recommendations for developing applications in this field. Although it wasn’t written during the era of GPT, I strongly recommend utilizing the wisdom found in this paper and refining it further to meet the needs of new technological advancements. I plan to write a separate article based on these and other recommendations I’ve found.

Another key takeaway from this paper is the categorization of the apps:

  1. Reflection-focused – mood reporting, self-monitoring, and improving emotional self-awareness
  2. Goal-focused – engaging users in activities to improve their coping self-efficacy
  3. Education-focused – mental health information, psychoeducation, and improving mental health literacy

If you believe it would be beneficial to explore another domain of AI applications within personal development, please let me know.

Mental Health Chatbots

Our journey now shifts focus to chatbots – digital creatures you can chat with. This is also the field where I’ve spent most of my time, professionally and leading my own research projects. However, the mental health domain wasn’t my priority until recently, even though I recall reading conversations that I would now classify as related to mental health, or at least as indications of a longing for coaching and self-reflection. But let’s start!

BANI & Resilience through AI Alliance

It is said that we live in a world growing in complexity. Resiliency and adaptability are to be the new evolutionary advantages in such a world. Simply put, a resilient person who is able to adapt will perform better in such an environment. It may not make the person happier, but it may help the person survive, which could contribute to less unhappiness. How can AI help here? To demonstrate, I’ve made up a dialogue that points out some of the main topics you will see later.

Welcome to the Brittle, Anxious, Non-linear and Incomprehensible (BANI) World!

To join the game, you need to grow your
Resilience and Adaptability to level 10.

You: But how?! I’m at level 3…

AI: Your insurance doesn’t provide mental health services in your town. The online service waiting queue is full for the next 6 months. Unfortunately, you don’t have enough credits for online coaching. And don’t even try calling the regional psychiatrist, if you want to avoid social exclusion. Your insurance doesn’t cover it, anyway. I will check the waiting queue every week to put you on the waiting list. I hope my information was helpful?

You: Eeee, so, what can I do?

AI: I can be your Mental Health Chatbot – I’m always available and helpful. Recommended by your friends – you can read their posts about how well they are doing.

You: Will it work up to level 10?

AI: Trust me and join the therapeutic alliance – it’s a collaboration between the patient and therapist (me) on the goals of treatment, along with an emotional bond. It’s the biggest predictor of the effectiveness of therapy.

You: What would I do without you…?

Opportunity for Mental Health Apps

What creates the opportunity for chatbots?

The problem:

Mental health problems are increasing in prevalence and severity. Up to 75% of students that need them do not access clinical services. Availability and cost are not primary barriers – stigma is the primary barrier. (Fitzpatrick, K., Darcy, A., & Vierhile, M. (2017))

The situation:

  1. 76% of 525 respondents would be interested in using mobile apps for mental health (Bakker, D., Kazantzis, N., Rickwood, D., & Rickard, N. (2016))
  2. Cognitive-behavioral therapy (CBT) apps demonstrated efficacy but poor adherence. (Fitzpatrick, K., Darcy, A., & Vierhile, M. (2017)) (Think of the apps without chatbots)
  3. Of 10 000 mental health applications, only 2% are supported by empirical evidence. (Boucher, E. M., Harake, N. R., Ward, H. E., Stoeckl, S. E., Vargas, J., Minkel, J., Parks, A. C., & Zilca, R. (2021))

Note the poor adherence, which means that users lose interest in the apps after some time. Chatbots are seen as the remedy.

Mental Health Chatbots on the Rise

Based on “Artificially intelligent chatbots in digital mental health interventions: a review.” by Boucher, E. M., Harake, N. R., Ward, H. E., Stoeckl, S. E., Vargas, J., Minkel, J., Parks, A. C., & Zilca, R. (2021), mental health chatbots comprise 39% of all health chatbots (with 41 chatbots identified in 2019 alone). The authors found that these chatbots most commonly support patients with symptoms of depression and anxiety, but they also address autism, suicide risk, substance abuse, PTSD, stress, dementia, acrophobia, borderline personality disorder (BPD), BPD with comorbid substance use disorders (BPD-SUD), and eating disorders (ED).

The most common methods used in these chatbot applications are:

  • Cognitive-Behavioral Therapy (CBT) (e.g. Fitzpatrick, K., Darcy, A., & Vierhile, M. (2017))
  • Self-Attachment (SAT) (e.g. Elahimanesh, S., Salehi, S., Movahed, S. Z., Alazraki, L., Hu, R., & Edalat, A. (2023))
  • Dialectical Behavior Therapy (DBT) (e.g. Rizvi, S. L., Dimeff, L. A., Skutch, J., Carroll, D., & Linehan, M. M. (2011))

What I’ve found is that many of these apps are created as experiments, research project grants, or student projects, and they often have a very short lifespan. They rarely reach a product state, not to mention business sustainability. These apps are primarily created to generate data for a publication, which is of course a valuable contribution. However, this approach doesn’t truly add to the solutions available for users to benefit from.

Can Chatbots Disrupt the Landscape?

Based on the experiments, research and data collected, what have the researchers found? Let’s start with a snippet from the randomized controlled trial of the Help4Mood application in 2016:

Changes in depression symptoms in individuals who used the system regularly reached potentially meaningful levels.

Burton, C., Szentagotai Tatar, A., McKinstry, B., Matheson, C., Matu, S., Moldovan, R., Macnab, M., Farrow, E., David, D., Pagliari, C., Serrano Blanco, A., Wolters, M.; Help4Mood Consortium. (2016).

Potentially meaningful levels might sound like a “potentially” positive statement. However, keep in mind the context: this applies to individuals who used the app regularly.

Let’s move on to 2017 and take a look at Woebot, a commercial chatbot that you may try yourself:

To our knowledge this is the first randomized trial of a nonembodied text-based conversational agent designed for therapeutic use. (…)

While results should be viewed with some caution and the findings need to be replicated, this study nonetheless demonstrates that a text-based conversational agent designed to mirror therapeutic process has the potential to offer an alternative and engaging method of delivering CBT for some 10 million college students in the United States who experience debilitating anxiety and depression.

Fitzpatrick, K., Darcy, A., & Vierhile, M. (2017)

Are all the results centered around the word “potentially”, and should they be treated with caution? I mean, shouldn’t the “potential” be realized at some point? Recall the grand claims that came with the AI hype.

How was the situation in 2022? This is one of the key points in “Human-Computer Interaction in Digital Mental Health”:

The design, development, implementation, and evaluation of digital mental health tools has the potential to help resolve systemic mental health care issues (e.g., through better and faster service for the underserved with low to moderate anxiety and depression as well as TECC for those at-risk of suicide).

Balcombe, L., & De Leo, D. (2022)

I found a paper from 2023 that appears more enthusiastic, though it uses the abbreviation “CAI” for conversational agent interventions – these are just chatbots:

The findings show that CAIs are research-proven interventions that ought to be implemented more widely in mental health care. CAIs are effective and easily acceptable for those with mental health problems. The clinical application of this novel digital technology will conserve human health resources and optimize the allocation of mental health services.

He, Y., Yang, L., Qian, C., Li, T., Su, Z., Zhang, Q., & Hou, X. (2023)

Does this mean “we have it” or “solved”? Be patient and listen to the critical voices first.

Critical Voices on Chatbots Efficacy

In the paper “Effects of a Fully Automated Conversational Agent on Anxiety and Depression: A Randomized Controlled Trial”, I found something you should remember and try to map to your work:

In our study, we found that employing a simple form of active intervention (emails requiring an answer) is comparable in benefits to a fully automated, AI-driven chatbot. Further research is needed to better understand the areas where automated bots might have an edge over simpler, potentially more economical interventions.

Gutu, S. M., Cosmoiu, A., Cojocaru, D., Turturescu, T., Popoviciu, C. M., & Giosan, C. (2021)

The point is, you won’t implement a complicated solution if you can achieve the same result with a simpler one. It’s not about following the AI hype, but creating value using the right tools, not just the popular hype-tools. This approach requires some maturity.

Mental Health Chatbots – Lack of Bond

It’s all about the relationship. It’s the relationship between the therapist and the person in therapy, the relationship between the coach and the client, and the relationship between the teacher and the student. It’s always the relationship that is the strongest predictor of the outcome of therapy, coaching and education. A good teacher is most likely one you also like. It all comes down to the relationship.

The therapeutic/working alliance refers to an important relationship quality between health professionals and clients that robustly links to treatment success.

Darcy, A., Daniels, J., Salinger, D., Wicks, P., & Robinson, A. (2021)

And that was also the foundation for a claim they investigated:

The lack of human empathy in chatbots will prevent a bond or therapeutic alliance from being formed.

Darcy, A., Daniels, J., Salinger, D., Wicks, P., & Robinson, A. (2021)
Evidence of Human-Level Bonds Established With a Digital Conversational Agent: Cross-sectional, Retrospective Observational Study

They evaluated the CBT-based and fully automated conversational agent Woebot between November 2019 and August 2020. What did they find?


Surprisingly, the claim doesn’t hold true. You might feel like you have a relationship with a chatbot (but wasn’t this already discovered with the first chatbot, ELIZA?).

The bond scores are comparable to traditional face-to-face CBT.

Conversations contain elements of bonding: gratitude, self-disclosed impact, and personification.

Users perceive chatbots as communicating empathy and emotional support.

Darcy, A., Daniels, J., Salinger, D., Wicks, P., & Robinson, A. (2021)

I enjoyed the visual comparison they made between various settings:

Comparison of Working Alliance Inventory-Short Revised bond subscale scores across therapeutic modalities.
Copied from:

Looks good, doesn’t it? The authors do not clarify how the perceptions of empathy differ from sessions with human therapists. However, the results clearly state that the bond, the relationship, the therapeutic alliance is present (or at least, it is perceived as present by the users)! Does this prove that chatbots, or at least Woebot, can be efficient therapists? Read on to find out.

And note that this is actually an example of a research paper focused on a commercial chatbot, which is positive, right? Yes, but it’s important to mention that the authors disclose interesting facts about their relationship with the Woebot company in the “Conflicts of Interests” section: “AD reported receiving grants from Woebot Health and having a patent pending for a SafetyNet protocol. AD, JD, DS, and AR reported being employees of Woebot Health and owning stock options in the company. PW reported receiving personal fees from Woebot Health and Ada Health (…)” Nonetheless, other papers suggest similar results and conclusions, which lends support to the authors’ achievements and findings.

Dear Chatbot, You don’t Understand…

Technologically, we are still in the era before the release of GPT. The user experience wasn’t as perfect as you might expect after reading about the therapeutic alliance with chatbots. The truth is that chatbots do not understand you… well, almost all chatbots don’t. This is evident in user reports. Staying with Woebot, here is what users said back in 2017:

Thematic map of participants’ least favored experiences using Woebot.
Copied from:

Moving to another chatbot, Tess, let’s examine a report from 2018:

Thematic flow of participants’ least favored features while interacting with Tess. Copied from:

Still, it doesn’t seem like a breakthrough, does it? So, moving on to 2020 and the paper titled “Perceptions and Opinions of Patients About Mental Health Chatbots”, which nicely summarizes the complaints in the chatbot landscape:

The main complaints concern the linguistic capabilities of the chatbots:

  • misunderstandings
  • repetitive interactions
  • irrelevant or inappropriate responses
Abd-alrazaq, A. A., Alajlani, M., Ali, N., Denecke, K., Bewick, B. M., & Househ, M. S. (2020)

The authors suggest areas that should be addressed to improve chatbots:

Important issues to be addressed in the future are the linguistic capabilities of the chatbots: they have to be able to deal adequately with unexpected user input, provide high-quality responses, and have to show high variability in responses.

Abd-alrazaq, A. A., Alajlani, M., Ali, N., Denecke, K., Bewick, B. M., & Househ, M. S. (2020)

And this doesn’t happen overnight, or even in a year, as visible in a paper from 2021:

Chatbots are not yet proficient in interpreting ellipses, metaphors, colloquialisms, and hyperbole (…)

Boucher, E. M., Harake, N. R., Ward, H. E., Stoeckl, S. E., Vargas, J., Minkel, J., Parks, A. C., & Zilca, R. (2021)

Mental Health Chatbots Collection

But did you know? The history of chatbots dates back to the 1960s. The bond and the problems associated with chatbots were already reported in the context of ELIZA, the first chatbot, described in a 1966 paper. ELIZA (actually, the German version of it) was also my inspiration for starting to build my own dialogue systems, and I greatly respect the work of Joseph Weizenbaum, who created it. You can even try out ELIZA today in various places, including this website – a link to this and other chatbots can be found in another article of mine, dedicated to mental health chatbots:

Mental Health Chatbots Collection

And the history doesn’t end in 2021, does it? In fact, a new chapter begins.

GPT is All You Need

Returning to our times: GPT, Chat, and LLM are the new technological tools, almost like magical spells, and AI has been named the word of the year for 2023. How do these advancements affect mental health chatbots?

GPT for the New Therapeutic Alliance

In 2022, Beatty, C., Malik, T., Saha, M., & Sinha, C. (2022) stated that only a few apps were capable of handling free-text input, and none were at the level of ChatGPT. ChatGPT has raised the bar. But why does this actually matter? How does it affect the therapeutic alliance?

In this setting, alliance thrives because a user can communicate their sense of a bond, or their goals and tasks. This can take place as a dynamic interaction, mirroring the capacity within human interactions.

Secondly, individuals may prefer free-text chat due to the autonomy, like conversing with humans. And fostering individual autonomy is central to the development of the therapeutic relationship and to good outcomes in psychotherapy.

Beatty, C., Malik, T., Saha, M., & Sinha, C. (2022):

So, what can LLMs do in this respect?

Chat – to Fuel the Enthusiasm

This is how LLMs achieve it, according to various authors, which may fuel your enthusiasm:

It responds in an empathetic and engaging manner.

It has the potential to provide a low-cost and effective complement to traditional human counselors with less barriers to access.

Brocki, L., Dyer, G. C., Gładka, A., & Chung, N. C. (2023)

It’s even better:

Such positive communication with a trustworthy and unbiased listener is beneficial for individuals with mental health concerns, especially for conditions such as depression and anxiety.

ChatGPT not only converses like a human but also exhibits generosity and prudence when making health-related recommendations, always advising professional consultation at the end of each conversation.

Farhat, F. (2023)

And it is so easy to use – just take the example self-development “prompts” available in many places on the internet:

  • Act as a life coach
  • Act as a motivational coach
  • Act as a motivational speaker
  • Act as a relationship coach to solve a conflict
  • Act as a self-help book
Scale Prompt Library. (n.d.)
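In practice, such “Act as…” prompts are usually sent as a system message ahead of the user’s own words. A minimal sketch of this, assuming a generic chat-completion API; the helper function, prompt text, and model name are illustrative, not taken from any cited source:

```python
# Sketch: wrapping an "Act as a life coach" prompt into the messages format
# used by most chat-completion APIs. build_coaching_messages is a hypothetical
# helper, not part of any real SDK.

def build_coaching_messages(role_prompt: str, user_message: str) -> list[dict]:
    """Pair a self-development role prompt (system message) with the user's message."""
    return [
        {"role": "system", "content": role_prompt},
        {"role": "user", "content": user_message},
    ]

messages = build_coaching_messages(
    "Act as a life coach. Ask clarifying questions before giving advice.",
    "I keep postponing my goals. How can I start?",
)
# The resulting list can then be passed to a chat API, e.g. (assuming the
# OpenAI Python SDK): client.chat.completions.create(model=..., messages=messages)
```

The point of the sketch is only that the “magic” of these prompts is a single instruction string; everything else is the standard request structure.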

It’s good for everything:

Whether you’re interested in losing weight, improving your relationships with loved ones, or infusing your life with compassion and contentment, ChatGPT can serve as an indispensable ally in your pursuit of self-improvement and we’ve included 30 personal development prompts below to show you how.

Zeis, P. (2023, June 16)

I mean, it is impressive. However, sometimes this very high level of enthusiasm can lead to temporary blindness, as it takes time to dive deeper and understand the situation more thoroughly.

Relax, Don’t Even Try to Resist It

So, let’s put it to the test. I could use a bit of relaxation; how about you? Let’s get some hints from ChatGPT.

Great, isn’t it? And it sounds so positive, unbiased, and empathetic. What could be better than that? A colleague of mine pointed out an interesting problem with this “forced kindness”. Could this be a problem that no one has yet described? Here is an imaginary chat to showcase it:

User: Be insistent or even aggressive, I want to practice NVC (Nonviolent Communication).

Chat: Oh, I can’t be aggressive (stop asking or you will be banned). Try to relax!

User: Convince me to do something I don’t want, to practice assertiveness.

Chat: Oh… Maybe try mindfulness?

User: NO!

In some ways, it actually worked for the User to train their assertiveness, but I hope you can grasp what I’m getting at here. There are things you can’t do with GPT due to “political reasons”, which are meant to ensure user safety. Consequently, the system can only accompany the user on part of their journey, which may not always align with what’s needed. This has changed over time. I’ve heard that in the beginning, it was possible to ask GPT to play an anti-hero role, and some people enjoyed that and miss the opportunity. There are probably other LLMs without such strict policies, as is the case with any restrictions.

The Alliance, yes, but a Therapist?

If you’re hungry for more examples, here’s an interesting one that pits GPT against a competing solution:

A conversational scenario in which a user asks a query with multiple symptoms. Left is a set of generated questions obtained by repetitive prompting ChatGPT. Right is a generation from ALLEVIATE, a knowledge-infused (KI) conversational agent with access to PHQ-9 and clinical knowledge from Mayo Clinic.
Copied from:

I’ve tested ChatGPT on various topics, and here’s what I found:

ChatGPT Works Best, If You Know Better

The same principle applies to other LLMs. We are not delving into technical aspects here, but there is a term that nicely describes one of the behaviors of LLMs (and, actually, LMs too) – hallucination. In simple terms, an LLM will “make up” things just to satisfy the user’s request. As a result, you can receive incorrect answers, which you might not be able to spot unless you know better. Even for topics that I thought were well known – like NVC – ChatGPT provided completely incorrect answers, which could be devastating for a person wanting to approach a situation with an NVC mindset. I share this view with other researchers:

(…) in this study, we assess ChatGPT’s effectiveness in providing mental health support, particularly for issues related to anxiety and depression, based on the chatbot’s responses and cross-questioning. The findings indicate that there are significant inconsistencies and that ChatGPT’s reliability is low in this specific domain. As a result, care must be used when using ChatGPT as a complementary mental health resource.

Certain prompts resulted in the model generating a list of medications for the subject condition. It could potentially harm society or individuals, particularly those dealing with depression or other mental health issues.

It should not be viewed as a substitute for in-person care.

It should be seen as a complementary tool that can supplement existing mental health treatment options. But care must be used when using ChatGPT. Use it with caution and under the guidance of someone knowledgeable about its use.

Farhat, F. (2023)


It can be potentially harmful, exhibiting manipulative, gaslighting, and narcissistic behaviors.

Lin, B., Bouneffouf, D., Cecchi, G., & Varshney, K. R. (2023).
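The cross-questioning that Farhat describes can be sketched as a simple consistency check: ask the model the same question several times and flag it as unreliable when the answers diverge. A minimal sketch with a stubbed model; `ask_model` is a hypothetical stand-in for a real chat-API call, not an actual library function:

```python
# Sketch of a cross-questioning consistency check: repeat the same prompt
# and flag the model as unreliable when its answers diverge.
# ask_model is a hypothetical stand-in for a real chat-API call.

def consistency_check(ask_model, prompt: str, repeats: int = 3) -> bool:
    """Return True when the model gives the same (normalized) answer each time."""
    answers = {ask_model(prompt).strip().lower() for _ in range(repeats)}
    return len(answers) == 1

# A stubbed model that alternates answers, mimicking the inconsistency
# reported for mental-health prompts:
_replies = iter(["Try journaling.", "See a doctor.", "Try journaling."])
flaky_model = lambda prompt: next(_replies)

print(consistency_check(flaky_model, "How do I manage my anxiety?"))  # False
```

Of course, consistent answers can still be consistently wrong, so a check like this can only flag unreliability, never confirm correctness.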

What else should we consider regarding AI?

Concerns with AI

Tech Challenges and Opportunities

The challenges for mental health chatbots, from a technological and product perspective, are:

  • Privacy (highly personal and sensitive nature of mental health data).
  • The gap between users’ clinical needs and AI chatbots.
  • Incorporate medical knowledge-base datasets.
  • AI may become vulnerable to mental health issues (stress, depression?).
  • LLM hallucinations that can produce toxic and harmful speech.
  • Models can hack reward objectives and generate undesirable behaviors if the objectives are not well defined to align with human values.
  • Potential harms, including racial prejudice due to the algorithmic bias.
  • Crisis response limitations.
  • Integrate streams of data from multiple sources (e.g., wearables).
  • Data scarcity is a major concern in languages other than English.

The generative nature of LLMs and the human-like level of their responses inspire original thinking. I appreciated the paper “Towards Healthy AI: Large Language Models Need Therapists Too” by Lin, B., Bouneffouf, D., Cecchi, G., & Varshney, K. R. (2023), which I could summarize as follows:

LLMs: First, Go to Therapy Yourself

And the authors propose a framework that uses psychotherapy to correct harmful behaviors in AI chatbots. It doesn’t solve the problem completely, but isn’t it interesting?

Although the SafeguardGPT framework shows promising results in correcting for harmful behaviors in AI chatbots, there are still several challenges and directions that need to be addressed in the future.

Lin, B., Bouneffouf, D., Cecchi, G., & Varshney, K. R. (2023)

Democratization… for Members Only?

One of the claims for AI and chatbots is that this technology can increase accessibility and democratize mental health services. Yes, but there are a few things to keep in mind:

  • Cost – can be expensive.
  • The need for regulation of AI.
  • Risk of widening the gap between students from different backgrounds.
  • Concerns about the role of AI in shaping individuals’ self-perception, and the impact of AI-driven self-optimization on mental health and well-being (Michealomis. (2023, January 18)).
  • Higher quality will minimize risks, but can all institutions implement expensive technology of higher quality? (Ahmad, S. F., Han, H., Alam, M. M., et al. (2023))

Human Resources Serve… Business?

Here are the concerns in the domain of HR and the work environment:

  • People analytics may lead to restructuring and are likely to increase workers’ stress if data is used in appraisals and performance management, leading to questions about micromanagement and feeling “spied on”.
  • Exposes workers to heightened psychosocial risks and stress. (Moore, P. V. (2019))
  • Fear of being replaced by machines (creates anxiety and job insecurity).
  • Dehumanization of hiring process and personal relationships.
  • Technostress.
  • Increases technology gap in society and inequality among countries. (Palos-Sánchez, P. R., Baena-Luna, P., Badicu, A., & Infante-Moro, J. C. (2022))
  • The most effective development combines AI with human mentors. (Frąckiewicz, M. (2023, September 10))
  • Potential job losses. (Potential? Really?)

But You Should Feel Better, Shouldn’t You?

Recall the benefits. Shouldn’t we actually feel better thanks to AI, and are these concerns just minor complaints? That could be the case. But note who is delivering the message for your credit decision, performance assessment, or job application. Even though the application and decision might have been handled by AI, we often prefer the messenger to be human. And businesses take care to make you feel good (guess why?). This is described by Yalcin, G., & Puntoni, S. (2023, September–October) in “How AI Affects Our Sense of Self” in the Harvard Business Review. The findings are as follows:

  • Feelings differ depending on who or what evaluates you.
  • Use humans to deliver good news.
  • Humanize AI to mitigate less-positive reactions to feedback or news from it (add humanlike features to rate the company more favorably).
  • All the participants whose applications had been denied rated the bank similarly – so, no need to involve humans to deliver bad news?

This is something you may not notice immediately, as it’s hidden from the user and from the customer. However, it has a significant impact on how our society functions. Later in this article, you’ll see why it’s important to know who is making the decisions behind the messenger’s face.

Identity. Co-Pilot, Co-Bot… Nice Tricks?

We are touching on another point regarding AI. Will it replace you in your job, for example, as the decision maker (since we know messengers may be useful)? Many sources highlight this concern:

  • The threat of AI and automation is a personal – even existential – one.
  • Work fulfils many different identity functions – sense of self-worth, belonging, opportunities to develop new skills, and meaningfulness in life. (Morgan, K. (2023, July 17))
  • People who identify with an activity (fishing, cooking, driving), see automation as an identity threat (leading to reduced product adoption). (Yalcin, G., & Puntoni, S. (2023, September–October))
  • In its best form, AI would not replace human intelligence, but augment it.
  • Risk that people will have to work at a cobot’s pace rather than the cobot working at a person’s pace.
  • AI-augmented robots in factories and warehouses create stress and a range of serious problems if they are not implemented appropriately.
  • Digitalization, automation, and algorithmic management, when used in combination are toxic and are designed to strip millions of folks of basic rights. (Moore, P. V. (2019))

Collaboration as Marketing Tactic

It feels safe to engage in the game of collaboration between humans and AI. This involves cooperation, augmentation, and enhancement of human skills, making AI merely a tool that still requires a human in the loop. However, that might actually be a marketing trick.

When the appliance was described as allowing people to at least partly use their skills, identity-motivated consumers had more-positive attitudes toward it.

Yalcin, G., & Puntoni, S. (2023, September–October)

This concept is part of many marketing courses and has a nice story to tell. What does it mean in the context of AI? Simply put, even though an AI solution could take over all your tasks, it may sell better if offered as a co-bot, an assistant that helps you. This strategy ensures better product adoption and makes users feel good about it. Now that you’re aware of this tactic, it may not work as effectively – sorry about that. But we are still better than AI, right?

How to Manage it? Just Be Better Than AI?

The recommendations I’ve found for staying ahead of the game are:

  • Read and learn – the threat can seem less imminent once you understand the technology’s shortfalls.
  • Refine skills tougher for AI to mimic; move into a strategic role.
  • Look at what AI can do – and what job tasks are still irreplaceable.
  • Prepare workers for physical risks, but also for mental and psychosocial risks introduced by digitalization at work.

I didn’t find any source that recommends simply staying calm and enjoying the journey. If you find one, feel free to share it with me.

It Won’t Affect Me… So, Why Worry?

It’s always the others… And there are opinions that support such thinking. Moore, P. V. (2019) highlights some important points in the paper titled “Artificial Intelligence in the Workplace: What is at Stake for Workers?” in “Work in the Age of Data”:

  • In some cases, workers’ brains, as well as their limbs, may no longer be needed.
  • It may be good if AI replaces menial tasks. Workers will be able to focus not only on more essential, less robot-replicable, expertise, but also on more meaningful and satisfying work that can make them feel good about themselves.
  • The global outsourcing of work has led to the development of a twenty-four-hour economy, eroding the fixed boundaries between home and work, which further puts a double burden on women, since responsibilities at home are unevenly distributed between the sexes.
  • Violence and harassment can occur via technology that blurs the lines between workplaces, “domestic” places and public spaces.

It may not affect you. But should you care how it affects others?

We See You Naked; No Need to Love Us

There is direct business value delivered through AI, thanks to its capability to “understand” humans, especially through text. Based on the findings of Lahuerta-Otero, E., & Cordero-Gutiérrez, R. (2022):

  • AI and marketing: predict personality and human behavior so that we can understand how people think and act.
  • Detect personality traits (small texts on social media posts, emails, or LinkedIn bios – users naturally write their texts or messages without being able to hide or manipulate their true personality), to get to know the deepest feelings of an individual.
  • Configure and create extremely customized content and commercial messages for current or potential customers that will have greater engagement, increasing conversion rates.

You will receive better ads and more relevant content tailored to meet your needs, aiding in making better (?) purchases. With fewer wasted ads and less wasted time, isn’t that beneficial?

Business is Also Naked

I enjoyed the clarity on business priorities presented in “How Does Artificial Intelligence Impact Learning And Software Development”:

If you want to continue in business and maintain the AI trend in retailing, making a profit must be your first goal.

Belford, D. T. (2023, June 9)

The same article makes the strong statement that “AI-powered digital coaches have already been successfully used to replace coaches, lecturers, speakers, and instructors.” I could stop here, but I like to get a wider perspective. Why?

Why AI?

Phoebe Moore from the University of Leicester authored the book “Work in the Age of Data” and published an online article (Moore, P. V. (2019)) that is rich in deep and provocative thoughts. I invite you to read it in full. Here are some of the points I’d like to highlight, starting with the observation that the issue with AI is more complex than simply asking: “What can be done?” or “How can AI be implemented ethically?”.

She writes that since Locke defined ethics as “the seeking out [of] those Rules, and Measures of humane Actions, which lead to Happiness, and the Means to practice them” (Essay, IV.xxi.3, 1824, p. 1689), the introduction of the machine as an agent for setting rules brings the entire concept of ethics under scrutiny.

She then recommends an alternative approach:

Rather than talking about how to implement AI without the risk of death, business collapse, or legal battles, which are effectively the underlying concerns that drive ethics in AI discussions today, it would make sense to rewind the discussions and focus on the question:

Why implement AI at all?

Will the introduction of AI (…) really lead to prosperous, thriving societies? Or will it deplete material conditions for workers and promote a kind of intelligence that is not oriented toward, for example, a thriving welfare state, good working conditions, or qualitative experiences of work and life?

Moore, P. V. (2019)

The question I came up with as one of the checkpoints is:

Who’s Growing? AI or People?

Surprisingly, or perhaps not, there’s the issue of decision-making I promised to cover. Is it always beneficial to have the best decision if that means reducing human ability to make such decisions? Ahmad, S. F., Han, H., Alam, M. M., et al. (2023) authored the paper titled “Impact of artificial intelligence on human loss in decision making, laziness and safety in education”. Here are the highlights from this paper:

  • Humans think they are benefiting and saving time by using AI in their decision-making processes.
  • However, AI is overcoming human biological processors by reducing the need for cognitive effort, leading to lower cognition capabilities in humans.
  • Data analysis shows that AI significantly impacts the loss of human decision-making ability and makes humans lazy.
  • It leads to an addiction behavior of not using human capabilities, thus making humans lazy.
  • Gradually, it starves the human brain as AI becomes deeply integrated into each activity, such as planning and organizing.
  • Reliance on AI may degrade skills and generate stress when physical or mental effort is required.

It reminds me of how people claim they’ve stopped walking and now rely solely on driving cars, even for short distances. Some of them go to the gym to stay fit, naturally driving there by car. With the popularity of gyms in shopping centers and other locations (close to parking spaces), it makes me wonder: Will we see a similar trend with “mental gyms” that will be recommended by AI, which might also serve as our mental-fitness trainer?

These multiple roles that AI can play smoothly shift our attention to another question:

AI – a Tool, an Agent, a System?

What Are We Introducing?

Sedlakova, J., & Trachsel, M. (2023) pose a question about AI chatbots in the psychotherapy context: Are they merely new tools or agents? They explore the opinion that AI has shifted from being a tool to being an emotionally intelligent entity, capable of forming a therapeutic bond and facilitating therapies. They suggest that considering it merely a tool would be illusory and would underestimate its impact:

Not a tool merely implementing evidence-based therapies nor as a digital therapist, but a new artifact that can change our interactions and concepts and whose status needs to be defined on the spectrum between a tool and a therapist or an agent respectively.

Sedlakova, J., & Trachsel, M. (2023)

However, they don’t accept AI as a subject either:

AI cannot be considered as an equal partner in a conversation as is the case with a human therapist. Instead, AI’s role in a conversation should be restricted to specific functions.

Sedlakova, J., & Trachsel, M. (2023)

So where is AI on this scale?

Tool/Agent – Scale with Three Tensions

Could it be a hybrid? Sedlakova, J., & Trachsel, M. (2023) identify three tensions that showcase the hybrid nature of AI chatbots:

  1. A computer program, developed for a specific purpose, lacks mental states and intentionality. These are the tool-like features.
  2. Where there is communication, there is already a relationship. This implies some agent-like and social features, even though AI doesn’t meet the conditions to be fully considered an agent.
  3. The phenomenal level: AI might be experienced and treated as if it were a subject or agent, which is essentially the core of the Turing test and reflects trends in AI development. The more AI is designed with anthropomorphic traits, such as mimicking empathy and emotions, the more it is experienced as another subject, even though it is not.

Thinking of AI as a hybrid offers more clarity on its nature. Yet, there’s another valuable perspective at the system level that views it from a caregiving standpoint. I find this particularly valuable in the context of mental health and personal development chatbots.

AI – a System with Compassion?

The system perspective I’ve found in the paper by Morrow, E., Zidaru, T., Ross, F., Mason, C., Patel, K. D., Ream, M., & Stockley, R. (2023) focuses on healthcare, but I think it may be applied in other domains as well, although compassion may not be the primary standpoint everywhere. Their findings suggest a new view of compassion:

Compassion as a system of caring,
involving both humans and AI.

They identify six elements that make up such a system:

  1. Awareness of suffering (e.g., pain, distress, risk, disadvantage).
  2. Understanding the suffering (significance, context, rights).
  3. Connecting with the suffering (e.g., verbal, physical, signs and symbols).
  4. Making a judgment about the suffering (the need to act).
  5. Responding with an intention to alleviate the suffering.
  6. Attention to the effect and outcomes of the response.
Morrow, E., Zidaru, T., Ross, F., Mason, C., Patel, K. D., Ream, M., & Stockley, R. (2023)

Any system that meets these criteria can be seen as a compassionate system. This applies to individuals (be they human or machine) and to collectives (such as healthcare organizations). It emphasizes the outcomes of the system – the compassion and care it embodies – as its main principle and acknowledges the role of AI solutions as well.

Final Words

What’s at Stake with AI?

It’s not (just) about you. AI is significantly shaping today’s world, and even more so, the future. It’s this future where our children will spend most of their lives. Viewing our actions through the eyes of our children may help in applying AI across the wide arena of personal development. That’s where the “Artificial Intelligence for Children” toolkit becomes handy:

It’s for companies, parents and children alike.

Children and youth can be especially vulnerable to the potential risks posed by AI, including bias, cybersecurity and lack of accessibility.

Parents, guardians and adults all have the responsibility to carefully select ethically designed AI products and help children use them safely.

World Economic Forum. (2022, March 29)

Your Opinion?

I’m collecting opinions about AI for Personal Development to share during presentations, and I might even compile them into an article. I value diverse perspectives, so I invite both professionals from the personal development field and the general public to contribute your views to the discussion. The two questions I’m asking are:

  1. How can AI and technology positively contribute to supporting personal growth and development?
  2. Do you use any AI tools for personal development, or are you aware of others who do?

Please feel free to send me your responses, including how you would like them to be used (e.g., may I publish them, share them during presentations, and how should I attribute them to you). Thank you!

Three Mental Health Secrets

When navigating AI for mental health, it’s always good to remember that there are still several things you can do for your health without AI. Bakker, D., Kazantzis, N., Rickwood, D., & Rickard, N. (2016) highlight the first two points I list here, but I believe it’s also important to include the third point. And as far as I know, there is no tool, device, or agent that can do this for you – it’s your move.

Increasing physical activity and promoting exercise can reduce depressive symptoms, anxiety and improve psychological well-being.

Music listening is also directly linked with mood improvement.

Every relation can be a therapeutic alliance – it’s up to you to shape it.


  1. Abd-alrazaq, A. A., Alajlani, M., Ali, N., Denecke, K., Bewick, B. M., & Househ, M. S. (2020). Perceptions and Opinions of Patients About Mental Health Chatbots: Scoping Review. Journal of Medical Internet Research, 23. Retrieved from
  2. Ahmad, S. F., Han, H., Alam, M. M., et al. (2023). Impact of artificial intelligence on human loss in decision making, laziness and safety in education. Humanities and Social Sciences Communications, 10, 311.
  3. Alazraki, L., Ghachem, A., Polydorou, N., Khosmood, F., & Edalat, A. (2021). An Empathetic AI Coach for Self-Attachment Therapy. In 2021 IEEE Third International Conference on Cognitive Machine Intelligence (CogMI) (pp. 98-105). IEEE. doi:
  4. Andersson, G., & Cuijpers, P. (2009). Internet-Based and Other Computerized Psychological Treatments for Adult Depression: A Meta-Analysis. Cognitive Behaviour Therapy, 38(4), 196-205.
  5. Arakawa, R., & Yakura, H. (2022). Human-AI communication for human-human communication: Applying interpretable unsupervised anomaly detection to executive coaching.
  6. Baee, S., Rucker, M., Baglione, A., Ameko, M. K., & Barnes, L. (2020). A Framework for Addressing the Risks and Opportunities In AI-Supported Virtual Health Coaches. In Proceedings of the 14th EAI International Conference on Pervasive Computing Technologies for Healthcare (PervasiveHealth ’20) (pp. 257-262). ACM.
  7. Bakker, D., Kazantzis, N., Rickwood, D., & Rickard, N. (2016). Mental Health Smartphone Apps: Review and Evidence-Based Recommendations for Future Developments. JMIR Mental Health, 3(1), e7. Retrieved from
  8. Balcombe, L., & De Leo, D. (2022). Human-Computer Interaction in Digital Mental Health. Informatics, 9(1), 14.
  9. Beatty, C., Malik, T., Saha, M., & Sinha, C. (2022). Evaluating the Therapeutic Alliance With a Free-Text CBT Conversational Agent (Wysa): A Mixed-Methods Study. Frontiers in Digital Health, 4. DOI: Retrieved from:
  10. Belford, D. T. (2023, June 9). How Does Artificial Intelligence Impact Learning And Software Development. eLearning Industry.
  11. Bennion, M., Hardy, G., Moore, R., Kellett, S., & Millings, A. (2020). Usability, Acceptability, and Effectiveness of Web-Based Conversational Agents to Facilitate Problem Solving in Older Adults: Controlled Study. Journal of Medical Internet Research, 22(5), e16794.
  12. Benvenuti, M., Cangelosi, A., Weinberger, A., Mazzoni, E., Benassi, M., Barbaresi, M., & Orsoni, M. (2023). Artificial intelligence and human behavioral development: A perspective on new skills and competences acquisition for the educational context. Computers in Human Behavior, 148, 107903. Retrieved from
  13. Bhattacharya, B. S., & Pissurlenkar, V. S. (2023). Assistive Chatbots for healthcare: A succinct review. ArXiv, abs/2308.04178. Retrieved from
  14. Boucher, E. M., Harake, N. R., Ward, H. E., Stoeckl, S. E., Vargas, J., Minkel, J., Parks, A. C., & Zilca, R. (2021). Artificially intelligent chatbots in digital mental health interventions: a review. Expert Review of Medical Devices, 18(sup1), 37-49.
  15. Brede, I. (2023, May 31). HeyPi — Your new personal life coach. Medium.
  16. Brocki, L., Dyer, G. C., Gładka, A., & Chung, N. C. (2023). Deep Learning Mental Health Dialogue System.
17. Brown, J. (2023, September 6). The 6 Best AI Apps and Websites for Personal Growth. MakeUseOf.
  18. Brown, J. E. H., & Halpern, J. (2021). AI chatbots cannot replace human interactions in the pursuit of more inclusive mental healthcare. SSM – Mental Health, 1, 100017.
  20. Burton, C., Szentagotai Tatar, A., McKinstry, B., Matheson, C., Matu, S., Moldovan, R., Macnab, M., Farrow, E., David, D., Pagliari, C., Serrano Blanco, A., Wolters, M.; Help4Mood Consortium. (2016). Pilot randomised controlled trial of Help4Mood, an embodied virtual agent-based system to support treatment of depression. Journal of Telemedicine and Telecare, 22(6), 348-355.
21. Cameron, G., Cameron, D., Megaw, G., Bond, R. R., Mulvenna, M. D., O’neill, S., Armour, C., & McTear, M. F. (2018). Best Practices for Designing Chatbots in Mental Healthcare – A Case Study on iHelpr.
  22. Cao, C. C., Ding, Z., Lin, J., & Hopfgartner, F. (2023). AI Chatbots as Multi-Role Pedagogical Agents: Transforming Engagement in CS Education.
  23. Castro, O., Mair, J. L., Salamanca-Sanabria, A., Alattas, A., Keller, R., Zheng, S., Jabir, A., Lin, X., Frese, B. F., Lim, C. S., Santhanam, P., van Dam, R. M., Car, J., Lee, J., Tai, E. S., Fleisch, E., von Wangenheim, F., Tudor Car, L., Müller-Riemenschneider, F., & Kowatsch, T. (2023). Development of “LvL UP 1.0”: A smartphone-based, conversational agent-delivered holistic lifestyle intervention for the prevention of non-communicable diseases and common mental disorders. Frontiers in Digital Health, 5.
  24. Chambers, A. (2023, November 2). Why I think artificial intelligence will be the turning point for HR. People Management.
  25. Chlasta, K., Sochaczewski, P., Grabowska, I., & Jastrzębowska, A. (2022). MyMigrationBot: A Cloud-based Facebook Social Chatbot for Migrant Populations. In Annals of Computer Science and Information Systems (FedCSIS 2022). PTI.
26. Coaches Rising. (2023, July 13). AI and the future of coaching [Workshop].
27. Darcy, A., Daniels, J., Salinger, D., Wicks, P., & Robinson, A. (2021). Evidence of Human-Level Bonds Established With a Digital Conversational Agent: Cross-sectional, Retrospective Observational Study. JMIR Formative Research, 5(5), e27868.
  28. Elahimanesh, S., Salehi, S., Movahed, S. Z., Alazraki, L., Hu, R., & Edalat, A. (2023). From Words and Exercises to Wellness: Farsi Chatbot for Self-Attachment Technique. arXiv:2310.09362 [cs.HC].
  29. Farhat, F. (2023). ChatGPT as a Complementary Mental Health Resource: A Boon or a Bane. Annals of Biomedical Engineering.
  30. Fernández-Felipe, I., Guillén, V., Castilla, D., Navarro-Haro, M. V., & García-Palacios, A. (2022). A smartphone application of “Family Connections” to increase the use of skills and improve psychological symptoms in relatives of people with borderline personality disorder: A study protocol for a randomized controlled trial. Internet Interventions, 29, 100546.
31. Fitzpatrick, K., Darcy, A., & Vierhile, M. (2017). Delivering Cognitive Behavior Therapy to Young Adults With Symptoms of Depression and Anxiety Using a Fully Automated Conversational Agent (Woebot): A Randomized Controlled Trial. JMIR Mental Health, 4(2), e19.
  32. Frąckiewicz, M. (2023, September 10). Navigating Growth: AI in Personal Development Programs. TS2 Space.
33. Fulmer, R., Joerin, A., Gentile, B., Lakerink, L., & Rauws, M. (2018). Using Psychological Artificial Intelligence (Tess) to Relieve Symptoms of Depression and Anxiety: Randomized Controlled Trial. JMIR Mental Health, 5(4), e64.
34. Gaffney, H., Mansell, W., Edwards, R., & Wright, J. (2014). Manage Your Life Online (MYLO): A pilot trial of a conversational computer-based intervention for problem solving in a student sample. Behavioural and Cognitive Psychotherapy, 42(6), 731-746. PMID: 23899405.
35. Goonesekera, Y., & Donkin, L. (2022). A Cognitive Behavioral Therapy Chatbot (Otis) for Health Anxiety Management: Mixed Methods Pilot Study. JMIR Formative Research, 6.
36. Gutu, S. M., Cosmoiu, A., Cojocaru, D., Turturescu, T., Popoviciu, C. M., & Giosan, C. (2021). Bot to the Rescue? Effects of a Fully Automated Conversational Agent on Anxiety and Depression: A Randomized Controlled Trial. Annals of Depression and Anxiety, 8(1), 1107.
  37. Harmon, S., Gale, H., & Dermendzhiyska, E. (2021). The Magic of the In-Between: Mental Resilience Through Interactive Narrative. In A. Mitchell & M. Vosmeer (Eds.), Interactive Storytelling. ICIDS 2021 (Vol. 13138). Springer, Cham.
  38. He, Y., Yang, L., Qian, C., Li, T., Su, Z., Zhang, Q., & Hou, X. (2023). Conversational Agent Interventions for Mental Health Problems: Systematic Review and Meta-analysis of Randomized Controlled Trials. Journal of Medical Internet Research, 25, e43862.
  39. He, Y., Yang, L., Zhu, X., Wu, B., Zhang, S., Qian, C., & Tian, T. (2022). Mental Health Chatbot for Young Adults With Depressive Symptoms During the COVID-19 Pandemic: Single-Blind, Three-Arm Randomized Controlled Trial. Journal of Medical Internet Research, 24(11), e40719.
  40. Hern, A. (2023, August 17). Google DeepMind testing ‘personal life coach’ AI tool. The Guardian.
  41. Hernandez, I., et al. (2022). The AI‐IP: Minimizing the guesswork of personality scale item development through artificial intelligence. Personnel Psychology.
  42. Isaacson, S. (2023, August 8). How to reach the utopia of coaching democratisation. LinkedIn.
  43. Ivarsson, J., & Lindwall, O. (2023). Suspicious Minds: The Problem of Trust and Conversational Agents. Computer Supported Cooperative Work, 32, 545–571.
  44. Kasneci, E., Sessler, K., Küchemann, S., Bannert, M., Dementieva, D., Fischer, F., Gasser, U., Groh, G., Günnemann, S., Hüllermeier, E., Krusche, S., Kutyniok, G., Michaeli, T., Nerdel, C., Pfeffer, J., Poquet, O., Sailer, M., Schmidt, A., Seidel, T., Stadler, M., Weller, J., Kuhn, J., & Kasneci, G. (2023). ChatGPT for good? On opportunities and challenges of large language models for education. Learning and Individual Differences, 103, 102274.
  45. Kettle, L., & Lee, Y.-C. (2023). User Experiences of Well-Being Chatbots. Human Factors, 0(0).
46. Koulouri, T., Macredie, R. D., & Olakitan, D. (2022). Chatbots to Support Young Adults’ Mental Health: An Exploratory Study of Acceptability. ACM Transactions on Interactive Intelligent Systems (TiiS), 12, 1-39.
  47. Kromme, C. (2022, December 29). Unlocking the Potential of AI for Personal Development. LinkedIn.
  48. Lahuerta-Otero, E., & Cordero-Gutiérrez, R. (2022). Artificial Intelligence and Personality Tests: Connecting Opportunities. Journal of Innovations in Digital Marketing, 3(2), 29-33.
  49. Law, A. J., Hu, R., Alazraki, L., Gopalan, A., Polydorou, N., & Edalat, A. (2022). A Multilingual Virtual Guide for Self-Attachment Technique. 2022 IEEE 4th International Conference on Cognitive Machine Intelligence (CogMI), 107-116.
  50. Lin, A. P. C., Trappey, C. V., Luan, C.-C., Trappey, A. J. C., & Tu, K. L. K. (2021). A Test Platform for Managing School Stress Using a Virtual Reality Group Chatbot Counseling System. Applied Sciences, 11, 9071.
  51. Lin, B., Bouneffouf, D., Cecchi, G., & Varshney, K. R. (2023). Towards Healthy AI: Large Language Models Need Therapists Too.
52. Mahnot, S. (2023, April 26). How AI is Revolutionizing Personal Development: The Next Big Thing! The Blog Relay.
  53. Mateos-Sanchez, M., Melo, A. C., Blanco, L. S., & García, A. M. F. (2022). Chatbot, as Educational and Inclusive Tool for People with Intellectual Disabilities. Sustainability, 14(3), 1520.
54. Mathew, R. B., Varghese, S., Joy, S. E., & Alex, S. S. (2019). Chatbot for Disease Prediction and Treatment Recommendation using Machine Learning. 2019 3rd International Conference on Trends in Electronics and Informatics (ICOEI), 851-856.
55. Michealomis. (2023, January 18). Unlocking Potential: How Artificial Intelligence is Transforming Education and Personal Development. Medium.
  56. Moore, P. V. (2019). Artificial Intelligence in the Workplace: What is at Stake for Workers? In Work in the Age of Data. BBVA.
  57. Morgan, K. (2023, July 17). AI can threaten your personal identity – but it doesn’t have to. BBC Worklife.
  58. Morrow, E., Zidaru, T., Ross, F., Mason, C., Patel, K. D., Ream, M., & Stockley, R. (2023). Artificial Intelligence Technologies and Compassion in Healthcare: A Systematic Scoping Review. Frontiers in Psychology, 13.
  59. Nißen, M., Rüegger, D., Stieger, M., Flückiger, C., Allemand, M., v Wangenheim, F., & Kowatsch, T. (2022). The Effects of Health Care Chatbot Personas With Different Social Roles on the Client-Chatbot Bond and Usage Intentions: Development of a Design Codebook and Web-Based Study. Journal of Medical Internet Research, 24(4), e32630.
  60. Nodus Labs. (2023, April 7). Using AI for Introspection and Psychology of Self. Nodus Labs Featured.
  61. Nodus Labs. (2023, April 9). Personal Journal and Diary App with AI Text Analysis Features. Nodus Labs Radar.
  62. Palos-Sánchez, P. R., Baena-Luna, P., Badicu, A., & Infante-Moro, J. C. (2022). Artificial Intelligence and Human Resources Management: A Bibliometric Analysis. Applied Artificial Intelligence, 36(1), 2145631.
63. Grouls, R. H., Hemker, M. M., & van Gool, A. (2020). Neurofeedback personalized with artificial intelligence to support personal development: A preliminary study.
64. Raamkumar, A. S., & Yang, Y. (2022). Empathetic Conversational Systems: A Review of Current Advances, Gaps, and Opportunities. ArXiv, abs/2206.05017.
65. Rashkin, H., Smith, E. M., Li, M., & Boureau, Y.-L. (2018). Towards Empathetic Open-domain Conversation Models: A New Benchmark and Dataset. In Annual Meeting of the Association for Computational Linguistics.
  66. Rizvi, S. L., Dimeff, L. A., Skutch, J., Carroll, D., & Linehan, M. M. (2011). A Pilot Study of the DBT Coach: An Interactive Mobile Phone Application for Individuals With Borderline Personality Disorder and Substance Use Disorder. Behavior Therapy, 42(4), 589-600.
  67. Rudin, P. (2017, June 16). AI to Support Personal Development. Singularity 2030.
  68. Sarkar, S., Gaur, M., Chen, L., Garg, M., Srivastava, B., & Dongaonkar, B. (2023). Towards Explainable and Safe Conversational Agents for Mental Health: A Survey.
69. Scale Prompt Library. (n.d.). ChatGPT Prompts to Help With Personal Development.
  70. Schneider, T. (2023, June 29). Unlocking the Power of AI: The Intersection of AI and Personal Growth – Maximizing Potential. LinkedIn.
  71. Sedlakova, J., & Trachsel, M. (2023). Conversational Artificial Intelligence in Psychotherapy: A New Therapeutic Tool or Agent? The American Journal of Bioethics, 23(5), 4-13.
  72. Shah, J., DePietro, B., D’Adamo, L., Firebaugh, M.-L., Laing, O., Fowler, L. A., Smolar, L., Sadeh-Sharvit, S., Taylor, C. B., Wilfley, D. E., & Fitzsimmons-Craft, E. E. (2022). Development and usability testing of a chatbot to promote mental health services use among individuals with eating disorders following screening. International Journal of Eating Disorders, 55(9), 1229–1244.
73. Sharma, A., Lin, I. W., Miner, A. S., Atkins, D. C., & Althoff, T. (2021). Towards Facilitating Empathic Conversations in Online Mental Health Support: A Reinforcement Learning Approach. In Proceedings of the Web Conference 2021.
  74. Sparrow, L. (2023, November 6). What coaching could look like supported by Artificial Intelligence (AI). Coach Mentoring.
  75. Suk, J. (2023, November 2). 7 Roles of Artificial Intelligence in Learning and Development. Hurix.
76. Slovin, T., & Woolf, B. (1988). A Consultant Tutor for Personal Development.
  77. The Dream Big Generation. (2023, March 20). Mastering Your Life: The Game-Changing Role of AI in Our Personal Development. Dream Big Generation.
78. Van Kempen, A. (2023, March 17). AI and Chat-GPT: Unlocking Human Potential through Enhanced Self-Awareness and Communication. Bujoo Academy.
79. World Economic Forum. (2022, March 29). Artificial Intelligence for Children.
  80. Yalcin, G., & Puntoni, S. (2023, September–October). How AI Affects Our Sense of Self. Harvard Business Review.
  81. Yu, D., Ding, M., Li, W., & Wang, L. (2019). Designing an Artificial Intelligence Platform to Assist Undergraduate in Art and Design to Develop a Personal Learning Plans. In A. Marcus & W. Wang (Eds.), Design, User Experience, and Usability. Application Domains (Vol. 11585, pp. 341-353). Springer, Cham.
  82. Zeis, P. (2023, June 16). ChatGPT for Personal Development: Supercharge Your Growth. Balanced Achievement.