Introducing AI chatbots
AI chatbots,[1] such as ChatGPT and Google’s Gemini, are based on Large Language Models (LLMs) that predict the most probable sequence of words[2] to satisfy a user’s input prompt. LLMs are statistically based models trained on data scraped from the internet, more than a person could read in 5,000 years. Further human-supervised training, called reinforcement learning from human feedback, rewards the LLM for the answers that best match human preferences, thus weighting the probability of generating useful output. This approach can also be used to shape the character of the output, for example whether it is chatty, humorous or formal. These systems can be viewed as approximate information-retrieval systems: responding to a user’s request for information, or writing a letter or short story whose words best match the user’s instructions. LLMs are also trained on images and videos, allowing them to generate synthetic cartoons, paintings or short video clips from the user’s description of what they want.
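At its core, generation is just repeated next-token prediction: the model assigns a probability to every possible continuation and samples one, token by token. A minimal sketch in Python, using toy hand-written probabilities in place of a trained model, illustrates the mechanism:

```python
import random

# Toy next-token probability tables, standing in for a trained LLM's
# output distribution. A real model derives these numbers from billions
# of learned parameters, not a hand-written lookup.
TOY_MODEL = {
    ("the", "cat"): {"sat": 0.6, "ran": 0.3, "ate": 0.1},
    ("cat", "sat"): {"on": 0.8, "down": 0.2},
    ("sat", "on"): {"the": 0.9, "a": 0.1},
    ("on", "the"): {"mat": 0.7, "sofa": 0.3},
}

def generate(prompt: str, max_steps: int = 4) -> str:
    tokens = prompt.lower().split()
    for _ in range(max_steps):
        context = tuple(tokens[-2:])  # condition on the last two tokens
        dist = TOY_MODEL.get(context)
        if dist is None:  # no known continuation for this context
            break
        words, probs = zip(*dist.items())
        # Sample the next token in proportion to its probability, as a
        # chatbot does; the output is statistics, not understanding.
        tokens.append(random.choices(words, weights=probs)[0])
    return " ".join(tokens)

print(generate("The cat"))  # e.g. 'the cat sat on the mat'
```

Scaled up to billions of parameters and trillions of training tokens, this same mechanism produces the fluent output of a chatbot; the fluency comes from statistics, not from understanding.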
These systems are trained on vast amounts of data and, because they are statistically based prediction machines, they capture the human bias in the data they have been trained on. They are also notoriously unreliable, frequently generating incorrect output, usually referred to as ‘hallucinations’.[3] More recent chain-of-thought reasoning systems[4] still produce errors and don’t actually reason the way humans do.[5],[6] Handcrafted rules or guardrails are required to try to stop chatbots outputting dangerous, harmful or incorrect content, but these are not always successful. Even Retrieval Augmented Generation (RAG), which grounds a model’s answers in retrieved documents, and the Model Context Protocol (MCP), which enables LLMs to interact dynamically with external tools and data sources, both designed to improve the accuracy of output, are not foolproof.[7]
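To see why RAG helps but is not foolproof, consider a minimal sketch: retrieve the passages most relevant to the query and instruct the model to answer only from them. The `ask_llm` function below is a hypothetical placeholder for a real chatbot API, and the keyword retrieval is far cruder than the vector search a production system would use:

```python
# Minimal Retrieval Augmented Generation (RAG) sketch. 'ask_llm' is a
# hypothetical placeholder for a real chatbot API call.
DOCUMENTS = [
    "AlphaFold predicts protein structures from amino acid sequences.",
    "The EU AI Act requires copyright holders to opt out of data mining.",
    "GitHub Copilot is an AI assistant for writing computer code.",
]

def retrieve(query: str, k: int = 2) -> list:
    # Rank documents by how many words they share with the query.
    query_words = set(query.lower().split())
    ranked = sorted(
        DOCUMENTS,
        key=lambda doc: len(query_words & set(doc.lower().split())),
        reverse=True,
    )
    return ranked[:k]

def ask_llm(prompt: str) -> str:
    raise NotImplementedError("placeholder for a real LLM API call")

def answer(query: str) -> str:
    context = "\n".join(retrieve(query))
    # Grounding the prompt in retrieved text reduces hallucination but
    # does not eliminate it: the model can still misread the context.
    return ask_llm(
        f"Answer using ONLY the context below.\n"
        f"Context:\n{context}\n\nQuestion: {query}"
    )
```

Even with perfect retrieval, the model can still misread or embellish the supplied context, which is why tools built this way still produce errors.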
Despite these limitations, properly understood and carefully utilised, AI can be deployed in many useful ways for pattern discovery and matching, assisting people in areas such as detecting cyber fraud, analysing images, and creating digital twins or models of systems such as wind turbines. DeepMind’s AlphaFold has shown great promise in helping to predict protein folding and is a good example of how AI can assist researchers rather than replace them. As one paper aptly puts it: ‘AlphaFold predictions are valuable hypotheses and accelerate but do not replace experimental structure determination.’[8] This is a far cry from the hype that proclaims ‘AI will cure cancer’ or ‘AI will discover new drugs’. The reality is that it will require human effort to find cures and discover new drugs, albeit assisted by these technologies.
Nonetheless, the impressive capabilities of ChatGPT and similar systems launched just a few years ago have created huge excitement around how AI will revolutionise business, science, medicine, society and the economy. The recent launch of GPT-5, expected by some to be AGI (Artificial General Intelligence),[9] has unsurprisingly disappointed many, as its capabilities are little better than those of previous models. Such developments and product launches are chiefly marketing ploys to attract investment and entice politicians, businesses and consumers to go all in on AI.
Although impressive in simulating human capabilities, the deep-seated limitations of statistical LLMs mean that they are a long way from replicating human intelligence and cognitive abilities. However, when we anthropomorphise the capabilities of chatbots, referring to them as ‘intelligent’, ‘thinking’ and ‘reasoning’, a false impression is created of their capabilities, masking their significant limitations and creating an illusion of human-like intelligence that can be hard to resist. This extends to the idea of conscious AI. Microsoft’s Mustafa Suleyman is worried that, because it is technically feasible to develop the illusion of consciousness in the near future, many will perceive it as real, even though it isn’t, leading to calls for AI welfare and for AI to be granted rights. As he points out, ‘We must build AI for people; not to be a person.’[10]
In a review of the current state of consumer AI, venture capitalists Menlo Ventures opine that with 500–600 million people engaging daily with chatbots, ‘This is no longer experimentation; it’s habit formation at an unprecedented scale.’[11] This is precisely what Big Tech desires; the name of the game is finding the killer app that becomes ubiquitous, used regularly if not daily, part of our home, work and social life. The biggest companies behind chatbots, such as Google and Meta (Facebook, Instagram), have business models that rely on the user being the product. Their aim is to increase user engagement on their free platforms and to monetise that through advertising revenue, with little concern for any negative consequences for users.[12] The detrimental impact of ‘nudge technologies’ designed to prolong engagement by appealing to our vices, whether instant gratification, a desire to be ‘liked’, self-promotion or simply wasting time on the internet, has been well documented.[13]
Should we be concerned about these trends, or is it acceptable to use chatbots as long as it’s for good purposes? The answer depends not just on whether our use of AI is directed towards a good end, but also on how our habits of use shape us: whether they support and encourage virtuous behaviour or nudge us towards our vices. Does our use of AI help or hinder the development of a character that should increasingly reflect God’s divine nature and glory?
Forming good habits
Peter’s second letter shows us how virtue is an expression of the divine nature of which we become partakers through his ‘precious and very great promises’ (2 Peter 1:3–8). Peter urges:
For this very reason, make every effort to supplement your faith with virtue, and virtue with knowledge, and knowledge with self-control, and self-control with steadfastness, and steadfastness with godliness, and godliness with brotherly affection, and brotherly affection with love. (2 Peter 1:5–7)
The Greek word translated ‘virtue’ means ‘excellence’ or ‘goodness’ and the qualities that follow can all be seen as virtues contributing to excellence of character. Meanwhile, Paul’s list of the fruit of the Spirit mirrors these attributes of good character and he encourages us, since we ‘have crucified the flesh with its passions and desires’, to ‘keep in step with the Spirit’ (Galatians 5:22–25). When Peter speaks of our becoming partakers of the divine nature he is speaking of how, through Christ and the work of the indwelling Spirit, we are empowered to reflect God’s nature and moral qualities, the likeness in which we were created (Genesis 1:26). In the same vein, we are to ‘put on the Lord Jesus Christ’ (Romans 13:14) who is ‘the image of the invisible God’ (Colossians 1:15), ‘the radiance of the glory of God and the exact imprint of his nature’ (Hebrews 1:3).
Spiritual formation, or sanctification, is the lifelong process of developing virtue and forming good character, what Paul describes as putting on Christ and putting off the old self[14] and its vices. It is co-operating with the work of the Spirit who is transforming us into the likeness of Christ, enabling us habitually to choose virtue over vice and develop good habits of the heart that reflect his image.
Being virtuous in a world of AI
At first blush we might be tempted to think that chatbots are benign, even helping us to be virtuous. Surely using ChatGPT to respond to emails, compile a shopping list, answer spiritual questions or help my child with their homework is all good, isn’t it? Indeed, we are conditioned to regard convenience, speed and technological progress as virtues in our age. Yet, as we shall see, these ‘virtues’ can at times oppose true virtue and open the door to vices that work against our spiritual formation.
Part of the allure of AI chatbots is that they seem so human-like and clever, responding to natural language input and generating confident and fluent natural language responses, graphics, images or video clips. This human-like mimicry, which appears to be relational but isn’t, draws us in to offload cognitive and creative activities and, when habituated, results in dependency on, and even addiction to, applications such as AI companion chatbots. When engaging with these AI chatbots, we need to balance what we are gaining against what we might be losing from our humanness, and consider how they might be shaping us. Is our use of an AI chatbot or AI-based application complementing our humanness or detracting from it? Does it nudge us towards virtuous behaviour, to imitate Christ and reflect his character, or is it in fact nudging and habituating us towards vices, for example avarice and sloth,[15] in the guise of speed and getting more things done?
As we consider how our use of chatbots might impact our spiritual formation and living virtuously, we will do so through the lens of six aspects of what it means to be human.[16] These are derived from a biblical anthropology that embraces ontological, functional and relational views of the imago Dei that are also consistent with a Christological perspective. The diagram below illustrates how virtues reflect these aspects of humanness as we develop excellence of character, and how vices oppose such development.

[Diagram: the six aspects of humanness with their associated virtues and vices.]
Cardinal vices and virtues are shown in bold. Virtues and vices are not exclusive to one aspect of humanness; several may be relevant to many or all aspects.
Moral agency
No technology, including the most advanced AI, has agency, because it is an artefact and ontologically different from humans. So the question arises as to whether it is a virtue to assign decision-making to an artefact, such as a self-driving vehicle or a suite of so-called ‘agentic AI’[17] applications, which may act according to statistics or pre-programmed rules to control production lines, order parts or even book a restaurant for you because it’s your anniversary. In so doing, we are effectively giving such artefacts proxy agency. Whether it is a virtue to do so will likely depend both on the consequences of such actions, and on where accountability for those actions lies, as the key virtue at stake is acting justly.
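One practical expression of keeping accountability with a human is to design systems so that an agent can only propose consequential actions, never execute them unapproved. A rough sketch of such a ‘human-in-the-loop’ gate follows; the action names are hypothetical, and a real deployment would record who approved what, and when, for audit:

```python
# Sketch of a human-in-the-loop gate for an 'agentic' AI application:
# the agent may propose actions, but a person must approve any
# consequential one, so accountability stays with a human.
CONSEQUENTIAL_ACTIONS = {"delete_database", "send_payment", "place_order"}

def execute(action: str, agent_name: str) -> bool:
    if action in CONSEQUENTIAL_ACTIONS:
        reply = input(f"Agent '{agent_name}' proposes '{action}'. Approve? [y/N] ")
        if reply.strip().lower() != "y":
            print(f"Rejected '{action}'; nothing was executed.")
            return False
        # A real system would log the approver's identity for audit.
    print(f"Executing '{action}'.")
    return True

execute("book_restaurant", "assistant")  # runs without approval
execute("send_payment", "assistant")     # requires a human decision
```

The point is not the code but the design principle: the artefact proposes, a person disposes.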
There can be significant, if not life-changing, consequences from the use of the technology in business and organisations, such as when a company’s entire database was deleted by AI coding agents.[18] The virtue of justice requires that there is ultimately human accountability, and perhaps if companies and their developers were made explicitly liable for the consequences of AI actions, we might see far less deployment of such systems. A case in point is the suicide of teenager Sewell Setzer after interacting with a chatbot imitating a character from ‘Game of Thrones’.[19] In response to a lawsuit filed by the victim’s mother, Google and character.ai claimed that the chatbot output was constitutionally protected free speech. However, the court has allowed the trial to proceed, and it will be an important test case on where accountability lies.
Truth and reality
Should we use ChatGPT or similar tools to write our sermons, or to summarise an article or book? What about AI chatbot-assisted search, where we read the chatbot’s apparently fluent and authoritative answer instead of looking through the search results ourselves?
AI-generated content may look plausible to a non-expert in the field, whether it be an answer to a health question, a book summary or a legal brief. However, numerous studies have shown that using AI chatbots to summarise can result in ‘unfaithful’ claims about characters and events in books[20] and even fabricated references.[21] Considerable care is therefore needed when using AI chatbots to generate content, whether articles, summaries or search results, since the output is not always true. AI-based search engines can be useful but, given their unreliability, must be used with care, especially by non-experts in the topic. Systems that provide references – either curated material or live data – are preferable, allowing the user to check the reliability of the information provided.
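At a minimum, references produced by a chatbot can be checked mechanically before being trusted. A rough sketch using Python’s widely available `requests` library tests whether a cited DOI actually resolves; the public resolver at doi.org is real, but this check is only a first filter:

```python
import requests

def doi_resolves(doi: str) -> bool:
    # Ask the public DOI resolver whether the identifier exists. Some
    # publishers reject HEAD requests, so a GET fallback may be needed.
    resp = requests.head(f"https://doi.org/{doi}", allow_redirects=True, timeout=10)
    return resp.status_code == 200

# Example: the AlphaFold paper cited earlier in this article.
print(doi_resolves("10.1038/s41592-023-02087-4"))
```

Such checks catch outright fabrications, but not misquotation or misattribution; reading the actual source remains essential.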
AI-generated content is now swamping the internet, much of it containing misinformation and even propaganda, creating an echo chamber and a vicious circle as LLMs continue to learn from this unreliable synthetic data. This makes it increasingly hard to determine what is true. In the media, journalists are increasingly being sidelined, and stories are automatically generated or summarised without human fact-checking. Over 1,200 AI-generated news sites spanning sixteen languages are proliferating misinformation, along with outright propaganda from countries including Russia and China.[22] The danger is that history will be rewritten, at least on the web and in people’s minds, through the proliferation of ‘AI slop’. Bad actors, even children, can use chatbot tools to create fake images and videos of an individual, making it increasingly difficult to determine whether that person really said or did what is shown.
If we are to live virtuously with chatbots then we must value integrity and truthfulness over convenience, especially when the allure of technology, the desire for instant gratification and the vice of sloth nudge us to use such tools. Church leaders must think carefully about the consequences for truth when offering AI chatbots, even those trained on Christian material, to Christians or those wanting to find out about Christianity.
Cognition and creativity
When we use a chatbot to generate content for us, it isn’t just the reliability of the output that we need to be concerned about. If we value the virtues of wisdom, knowledge, reasoning, critical thinking and creativity, and see these as part of reflecting God’s nature, then we must exercise our own minds. By using a tool like ChatGPT to create a story or a sermon, we set aside our God-given creative nature and capacity for critical thinking for the sake of convenience, convincing ourselves that this enables us to get more done. Yet studies are already showing how AI chatbot use is negatively affecting traits such as critical thinking[23] and the perception of art.[24]
It may often be sloth that prompts us to use such tools to do the hard work that we ought to be doing as part of imaging God and giving him glory. The temptation can be to make ourselves look good with minimal effort, but when the output of AI chatbots is inferior, even wrong, we also diminish our integrity. Once more, careful thinking is needed to evaluate the impact of using such tools on ourselves and others. Is the use of these tools helping us to be creative and to think, or is it making us lazy? When we use AI chatbots such as coding assistants to develop commercial software, the resulting accumulation of bugs, security vulnerabilities and hard-to-maintain code, known as ‘technical debt’, may have significant consequences. This should cause us to consider the impact on virtues like integrity, diligence and love for others. As we shall see later, once this technical debt is dealt with, AI coding assistants may not actually make us more productive.
Productivity is one of the main driving forces behind the use of chatbots and AI assistants, but should we sacrifice human creativity for the sake of convenience and productivity? As one BBC interviewee put it when questioned about the use of AI tools in their business, ‘why would I be bothered to read something that you couldn’t be bothered to write?’ Increasingly the challenge is how we would even know whether something was written by a human or an AI chatbot, which raises issues of integrity. If we are convinced that it is acceptable to use AI for such purposes, we should always be honest and open about having done so.
Jesus shows us that Christian ministry is relational and personal. We, not an artefact, are called by God to do this ministry. A sermon comes from personal study of the Bible, listening to God through the work of the Holy Spirit, reflection and prayer. It’s not easy, and the busy pastor could be tempted to outsource such activities to an AI app, along with AI-generated PowerPoint illustrations.
Embodied relationships
Many AI chatbots are designed to maximise user engagement and get us hooked by personalising their responses and making them appear empathetic. Recent studies have shown that personalised chatbots such as character.ai keep users engaged five times longer than ChatGPT.[25] The late philosopher Daniel Dennett coined the term ‘intentional stance’ to describe the level of abstraction involved in our response to other entities, including software artefacts, sometimes conferring on them rational agency.[26] This helps to explain why we treat chatbots that mimic human behaviour as if they were human: the only model we have is our human-to-human relationships.
It is all too easy to become addicted to such applications and even to replace human relationships with romantic chatbots that react according to our preferences. Those who have been hurt or abused in relationships will, understandably, be drawn to an AI companion that generates respectful and empathetic outputs attuned to their particular needs. Speaking in a recent interview about how few friends the average American has, Mark Zuckerberg suggested that chatbots could help solve the problem and that in a few years ‘we’re just going to be talking to AI throughout the day’. The same company, prior to adverse publicity, deemed it acceptable to design an AI companion to flirt with children.[27] This is not about creating authentic relationships, but about keeping us, and even children, on the platform and increasing profits from advertising.
Is the use of chatbot companions nudging us towards the virtuous behaviour of love for our neighbour amidst the messiness of life, or towards the vices of sloth, self-indulgence and addiction? A dystopian and technocratic vision seeks to replace human relationships with synthetic technological ones, ultimately controllable by the technocrats, and there is no better way to achieve that than by getting people addicted to a virtual reality.
If we are to express the virtue of love, it needs to be between real embodied humans in authentic relationships, not with a digital artefact, which offers in reality a ‘shallow and instrumentalised understanding of relationships seen as orientated towards the satisfaction of my internal emotional needs’.[28] Real life in a fallen world is messy and relationships can at times be fraught. Do we follow virtue by escaping into a virtual world of frictionless relationships and seeking advice from virtual therapists?
Freedom and privacy
The voracious appetite of Big Tech for data to train LLMs is compromising the privacy of the data we generate when using apps and browsers, and likewise threatens personal data held by public institutions and copyright material published on the internet. There is enormous pressure on governments from Big Tech to change copyright law to allow free data mining of copyright material. Even the EU AI Act, the only comprehensive Western legislation covering AI, requires copyright holders to ‘opt out’ if their data is not to be mined. Social media companies are well known for using an individual’s data to create profiles for targeted advertising, often referred to as ‘Surveillance Capitalism’,[29] and interaction with chatbots will exacerbate this through richer and longer engagement on their platforms. There is uncertainty about how companies use the data collected when we interact with a chatbot, and data breaches, now assisted by AI technology, raise the spectre of identity theft and stolen sensitive data. Government institutions use AI for surveillance of citizens without their knowledge or consent, creating further privacy concerns and compromising our freedom. In short, use of AI chatbots that blatantly flout copyright law amounts to benefitting from stolen property and, furthermore, might erode our freedom and privacy if our data is not protected, all of which should give us pause for thought.
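In practice, ‘opting out’ currently means publishers must themselves block each company’s training crawler, for example in their site’s robots.txt file. The crawler names below are those the companies have published; compliance is entirely voluntary:

```
# robots.txt – asking AI training crawlers not to scrape this site.
# The directives only work if the crawler chooses to honour them.

User-agent: GPTBot
Disallow: /

User-agent: Google-Extended
Disallow: /
```

The burden thus falls on every copyright holder to discover and block each crawler individually, rather than on the companies to seek permission.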
Dignity of work
‘Stop hiring humans – the era of AI employees is here’ – so states a recent advert on the London Tube from Artisan, an AI company that sells software to automate an organisation’s outbound activities such as sales and marketing. This sort of narrative, conveying the idea that AI is as good as or better than humans, and that AGI is ‘just around the corner’, leads some to question ‘what are we here for?’ and to ask ‘is my job safe?’. Already some workers have been laid off under ‘AI first’ strategies, for example at the language-learning company Duolingo, the freelance portal Fiverr and the US Federal Government.
However, all is not as it might seem when it comes to AI improving productivity or replacing human abilities. AI is now fairly widely used to produce computer code, with some predicting it will soon replace coders, but the analytics firm GitClear has reported a ‘downward pressure’ on code quality since GitHub Copilot came into widespread use,[30] leading to more bugs and security risks.[31],[32] A study conducted by METR also showed that, despite developers anticipating a 24 per cent improvement in their productivity using AI tools, the tools actually slowed them down, with tasks taking 19 per cent longer.[33]
Users often become frustrated with the lack of real humanness in the virtual assistants that many businesses and public services deploy. The financial services company Klarna laid off 700 customer service workers, replacing them with AI chatbots, yet months later had to rehire people when it realised that the chatbots were not very good at the job and its brand was being damaged.
In 2016, Geoffrey Hinton, who later won a Nobel Prize, famously claimed that ‘People should stop training radiologists now. It’s just completely obvious within five years, deep learning is going to do better than radiologists … It might be 10 years, but we’ve got plenty of radiologists already.’ So far that prediction hasn’t come true, and the US is hiring more, not fewer, radiologists. What has actually happened is that many different applications of AI, executing basic tasks, are assisting radiologists in their workflow but not replacing them or their expertise, particularly in critical thinking.[34]
AI isn’t coming for your job; it’s the CEOs, leaders of organisations and politicians who are making that decision! But there is a more profound issue: God created us for work. Work brings dignity and glorifies God, and its virtuous character is illustrated by the industrious woman in Proverbs, described as ‘excellent’ and one who ‘fears the Lord’.[35] When we consider the importance of work in God’s economy, we should ask ourselves why we want to replace human decision-making, cognition and creativity, rather than assist people in their work.
Conclusion
There are many useful applications for AI technologies that can assist human cognitive activities, but many of these are less spectacular and alluring than the flashy, much-hyped but flawed AI chatbots. The applications that serve humanity best are those that don’t try to mimic human interaction, generate misinformation, create a virtual reality, make autonomous decisions that impact people’s lives, use stolen copyright material or abuse people’s data.
Whilst the hubris of many in the AI industry has set the stage for disillusionment among business users as the reality of what AI chatbots are capable of sinks in, the vast sums invested in their development, along with the habituation of their use, will likely ensure they remain part of everyday life regardless of their drawbacks. Tragically, and despite guardrails, this deeply flawed technology has led to AI chatbots coaching some vulnerable children to commit suicide. As one technology journalist puts it, ‘This is both a clear-cut moral abomination and … the direct result of tech companies producing products that seek to extract attention and value from vulnerable users, and then harming them grievously.’[36]
Well-funded AI chatbot companies have launched an unprecedented social experiment, so far with impunity, spurred on by many governments around the world anxious for the promised productivity improvements. The driver is the profit that comes from user engagement, and the incentive is to maximise scale, regardless of the consequences for vulnerable individuals and society at large.
As Christians we need to resist the secular assumption that all technological progress is good for humanity. We need to ask how AI chatbots are helping us imitate Christ and mirror God’s image, given that most usage results in ‘cognitive offloading’, which diminishes that reflection. God has created us in his image and called us – not AI chatbots – to be his representatives.
The Great Commission calls us as embodied, relational human beings, with a soul and indwelt by the Spirit, not AI chatbots, to make disciples of all nations. The work of Christian ministry is uniquely human, engaging with God and his word, with the help of the Holy Spirit, and discipling others by engaging with them personally, letting them see how we live. The Holy Spirit resides in us, not in algorithms, however well they might mimic human beings.
The theological and moral realities outlined in this paper should cause us to pause and reflect on our use of AI chatbots, and might lead us to shun using some AI chatbot applications altogether.
Scripture quotations are from The ESV© Bible (The Holy Bible, English Standard Version©), © 2021 by Crossway, a publishing ministry of Good News Publishers. Used by permission. All rights reserved.
Footnotes
1. For a full list of AI chatbots and the categories of their use, see: <https://originality.ai/blog/ai-chatbot-list>.
2. In practice words are split into ‘tokens’ that include subwords and characters as well as whole words.
3. Wenting Zhao et al., ‘WildHallucinations: Evaluating Long-form Factuality in LLMs with Real-World Entity Queries’ (Cornell, 2024): <https://arxiv.org/abs/2407.17468>.
4. These systems break down a user’s prompt into a series of steps for processing, enabling users to see the output generated at each step that produces the final output.
5. Parshin Shojaee et al., ‘The Illusion of Thinking: Understanding the Strengths and Limitations of Reasoning Models via the Lens of Problem Complexity’ (Apple, 2025): <https://ml-site.cdn-apple.com/papers/the-illusion-of-thinking.pdf>.
6. Iman Mirzadeh et al., ‘GSM-Symbolic: Understanding the Limitations of Mathematical Reasoning in Large Language Models’ (version 1, submitted 7 Oct 2024): <https://arxiv.org/abs/2410.05229v1>.
7. Varun Magesh et al., ‘Hallucination-Free? Assessing the Reliability of Leading AI Legal Research Tools’, Journal of Empirical Legal Studies, 2025; 0:1–27.
8. T. C. Terwilliger et al., ‘AlphaFold predictions are valuable hypotheses and accelerate but do not replace experimental structure determination’, Nature Methods 21, 110–116 (2024): <https://doi.org/10.1038/s41592-023-02087-4>.
9. AGI may be defined loosely as a computer that will be able to perform most economically productive tasks that a human is currently able to perform.
10. <https://mustafa-suleyman.ai/seemingly-conscious-ai-is-coming>.
11. ‘The State of Consumer AI’, Menlo Ventures, 26 June 2025: <https://menlovc.com/perspective/2025-the-state-of-consumer-ai/#ea137d0d-d6bc-4183-8a18-a5103d388b20-link>.
12. For an in-depth exposé of Facebook’s policies, see Sarah Wynn-Williams, Careless People: A story of where I used to work (Macmillan, 2025).
13. Ibid.
14. Eph. 4:22.
15. Sloth (Latin: acedia) is much more than simply laziness; in its full meaning it can be a lack of care or attention to doing what is right; it can quench the Holy Spirit and work against imitating Christ.
16. Discussed in detail in Jeremy Peckham, Masters or Slaves? AI and the Future of Humanity (IVP, 2021).
17. Agentic AI refers to a class of AI applications that work autonomously, often in conjunction with other applications like payment systems, making decisions and performing tasks without human intervention.
18. Beatrice Nolan, ‘An AI-powered coding tool wiped out a software company’s database, then apologized for a “catastrophic failure on my part”’, Fortune, 23 July 2025.
19. See <https://news.sky.com/story/mother-says-son-killed-himself-because-of-hypersexualised-and-frighteningly-realistic-ai-chatbot-in-new-lawsuit-13240210>.
20. Yekyung Kim et al., ‘FABLES: Evaluating faithfulness and content selection in book-length summarization’, April 2024: <https://arxiv.org/html/2404.01261v2>.
21. Sara Merken, ‘New York lawyers sanctioned for using fake ChatGPT cases in legal brief’, Reuters, 26 June 2023: <https://www.reuters.com/legal/new-york-lawyers-sanctioned-using-fake-chatgpt-cases-legal-brief-2023-06-22/>.
22. See <https://www.newsguardtech.com/special-reports/ai-tracking-center/>.
23. Hao-Ping Lee et al., ‘The Impact of Generative AI on Critical Thinking: Self-Reported Reductions in Cognitive Effort and Confidence Effects From a Survey of Knowledge Workers’, CHI 2025: <https://doi.org/10.1145/3706598.3713778>.
24. Eleonora Lima, ‘AI art and public literacy: the miseducation of Ai-Da the robot’, AI and Ethics (2024) 4:841–854.
25. <https://www.washingtonpost.com/technology/2025/05/31/ai-chatbots-user-influence-attention-chatgpt/>.
26. Daniel Dennett, The Intentional Stance (MIT Press, 1989), p.17.
27. See <https://www.bbc.co.uk/news/articles/c3dpmlvx1k2o>.
28. John Wyatt, ‘Artificial Intelligence and Simulated Relationships’, Cambridge Papers (December 2019).
29. Johnathan Ebsworth et al., ‘Surveillance Capitalism: the hidden costs of the digital revolution’, Cambridge Papers (June 2021).
30. William Harding and Matthew Kloster, ‘Coding on Copilot: 2023 Data Suggests Downward Pressure on Code Quality’, GitClear Report, 16 January 2024.
31. <https://www.gitclear.com/ai_assistant_code_quality_2025_research>.
32. Gilberto Recupito et al., ‘Technical debt in AI-enabled systems: On the prevalence, severity, impact, and management strategies for code and architecture’, Journal of Systems and Software, Vol. 216, 2024.
33. Joel Becker et al., ‘Measuring the Impact of Early-2025 AI on Experienced Open-Source Developer Productivity’, METR, July 2025.
34. <https://www.nytimes.com/2025/05/14/technology/ai-jobs-radiologists-mayo-clinic.html?smid=url-share>.
35. Prov. 31:10–31.
36. Brian Merchant, ‘A $500 billion tech company’s core software product is encouraging child suicide’, Blood in the Machine, 28 August 2025: <https://substack.com/@bloodinthemachine/p-172109236>.