{"id":1645,"date":"2023-04-12T14:54:27","date_gmt":"2023-04-12T12:54:27","guid":{"rendered":"https:\/\/valgrai.eu\/?p=1645"},"modified":"2023-04-12T14:54:27","modified_gmt":"2023-04-12T12:54:27","slug":"why-the-intelligence-of-chatgpt-does-not-know-how-to-solve-this-problem","status":"publish","type":"post","link":"https:\/\/valgrai.eu\/en\/2023\/04\/12\/why-the-intelligence-of-chatgpt-does-not-know-how-to-solve-this-problem\/","title":{"rendered":"Why does the &#8216;intelligence&#8217; of ChatGPT not know how to solve this problem?"},"content":{"rendered":"<p>During a pleasant lunch with my colleagues Federico Barber, Antonio Garrido, and Antonio Lova, the conversation revolved around ChatGPT, including its unexpected outcomes, potential risks, and inaccuracies. Antonio Garrido shared that he had consulted ChatGPT about a problem used in the Artificial Intelligence Techniques course, but ChatGPT was unable to provide a solution. This inspired me to test ChatGPT myself by presenting it with the problem and investigating why it fails to identify the answer.<\/p>\n<p>The problem in question is the following:<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"aligncenter size-full wp-image-1646\" src=\"https:\/\/valgrai.eu\/wp-content\/uploads\/2023\/04\/1.png\" alt=\"\" width=\"850\" height=\"462\" srcset=\"https:\/\/valgrai.eu\/wp-content\/uploads\/2023\/04\/1.png 850w, https:\/\/valgrai.eu\/wp-content\/uploads\/2023\/04\/1-300x163.png 300w, https:\/\/valgrai.eu\/wp-content\/uploads\/2023\/04\/1-768x417.png 768w, https:\/\/valgrai.eu\/wp-content\/uploads\/2023\/04\/1-600x326.png 600w\" sizes=\"auto, (max-width: 850px) 100vw, 850px\" \/><\/p>\n<p>After asking ChatGPT the question, the response provided was:<img loading=\"lazy\" decoding=\"async\" class=\"aligncenter size-full wp-image-1648\" src=\"https:\/\/valgrai.eu\/wp-content\/uploads\/2023\/04\/2.png\" alt=\"\" width=\"850\" height=\"924\" 
srcset=\"https:\/\/valgrai.eu\/wp-content\/uploads\/2023\/04\/2.png 850w, https:\/\/valgrai.eu\/wp-content\/uploads\/2023\/04\/2-276x300.png 276w, https:\/\/valgrai.eu\/wp-content\/uploads\/2023\/04\/2-768x835.png 768w, https:\/\/valgrai.eu\/wp-content\/uploads\/2023\/04\/2-600x652.png 600w\" sizes=\"auto, (max-width: 850px) 100vw, 850px\" \/><\/p>\n<p>ChatGPT&#8217;s reasoning in its response is sound, but its conclusion is erroneous, resulting in an incorrect answer. Consequently, ChatGPT&#8217;s &#8216;intelligence&#8217; fails to deduce the correct solution from the available information, something a human is able to do.<\/p>\n<p>When informed of its mistaken response, ChatGPT generates a new answer supported by logical reasoning.<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"aligncenter size-full wp-image-1650\" src=\"https:\/\/valgrai.eu\/wp-content\/uploads\/2023\/04\/3.png\" alt=\"\" width=\"850\" height=\"960\" srcset=\"https:\/\/valgrai.eu\/wp-content\/uploads\/2023\/04\/3.png 850w, https:\/\/valgrai.eu\/wp-content\/uploads\/2023\/04\/3-266x300.png 266w, https:\/\/valgrai.eu\/wp-content\/uploads\/2023\/04\/3-768x867.png 768w, https:\/\/valgrai.eu\/wp-content\/uploads\/2023\/04\/3-600x678.png 600w\" sizes=\"auto, (max-width: 850px) 100vw, 850px\" \/><\/p>\n<p>The argumentative structure is still valid, but something strange happens in both its reply and its reasoning: it introduces as possible dates for Ann&#8217;s birthday May 15, 16, and 19; June 18; July 14 and 16; and August 14, 15, and 17, which were not contemplated in the statement as possible solutions. A human would not consider, as possible solutions, alternatives that had not been provided. 
What about ChatGPT&#8217;s &#8216;intelligence&#8217;?<\/p>\n<p>We shouldn\u2019t forget that ChatGPT, like other Large Language Models (LLMs), is a probabilistic language generator, most commonly built on the transformer architecture (Vaswani, Ashish; Shazeer, Noam; Parmar, Niki; Uszkoreit, Jakob; Jones, Llion; Gomez, Aidan N.; Kaiser, Lukasz; Polosukhin, Illia (2017). &#8220;Attention Is All You Need&#8221;. arXiv:1706.03762; He, Cheng (2021). &#8220;Transformer in CV&#8221;. Towards Data Science).<\/p>\n<p>As Christopher Manning says, \u2018Though trained on simple tasks along the lines of predicting the next word in a sentence, neural language models with sufficient training and parameter counts are found to capture much of the syntax and semantics of the human language. In addition, large language models demonstrate considerable general knowledge about the world, and are able to &#8220;memorize&#8221; a great number of facts during training\u2019 (Manning, Christopher D. (2022). &#8220;Human Language Understanding &amp; Reasoning&#8221;. Daedalus).<\/p>\n<p>Does Manning&#8217;s description of this ability to memorize suffice to qualify LLM-based systems as &#8216;intelligent&#8217; or &#8216;mindful&#8217;? In my view, it does not. LLM technology is undoubtedly fascinating and has yielded impressive results, such as the capability to search for information and present it in a comprehensive, understandable format, and it is effective at solving specific problems, particularly when integrated with other AI technologies. Even so, it is premature to claim that ChatGPT or other LLM-based systems possess consciousness. 
As with most AI techniques, LLMs have a long way to go before reaching such a level of sophistication.<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"aligncenter size-full wp-image-1652\" src=\"https:\/\/valgrai.eu\/wp-content\/uploads\/2023\/04\/4.png\" alt=\"\" width=\"850\" height=\"875\" srcset=\"https:\/\/valgrai.eu\/wp-content\/uploads\/2023\/04\/4.png 850w, https:\/\/valgrai.eu\/wp-content\/uploads\/2023\/04\/4-291x300.png 291w, https:\/\/valgrai.eu\/wp-content\/uploads\/2023\/04\/4-768x791.png 768w, https:\/\/valgrai.eu\/wp-content\/uploads\/2023\/04\/4-600x618.png 600w\" sizes=\"auto, (max-width: 850px) 100vw, 850px\" \/><\/p>\n<p>In its new answer, it still considers dates that are not among the given options as possible solutions, such as July 14 or dates in August.<\/p>\n<p>While not every human can solve the presented problem, many can. Through a trial-and-error approach, a human or another AI technique could discover the solution in at most nine steps simply by iterating over the ten alternatives: whenever a chosen answer turns out to be wrong, it is discarded and another is tried, so the solution is guaranteed to be found after at most nine wrong attempts. ChatGPT, however, drifts away from the solution by incorporating into each new response data that was not among the possible solution options.<\/p>\n<p>I believe we can conclude that Large Language Model-based systems are a powerful and fascinating Artificial Intelligence technology whose surprising results will undoubtedly continue to amaze us. However, I think it is risky to attribute consciousness or general intelligence to these systems. It is crucial to remember that there are many other AI techniques we can use to develop intelligent systems, and combining various methods may lead to better intelligent systems that help solve problems requiring intelligence and benefit humanity. 
Nonetheless, it is essential to emphasize that all AI techniques, including LLMs such as ChatGPT, must remain human-centered and be regulated to ensure adherence to ethical principles. Responsible Artificial Intelligence has an important role to play in AI in general, and in the area of LLMs such as ChatGPT in particular.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>During a pleasant lunch with my colleagues Federico Barber, Antonio Garrido, and Antonio Lova, the conversation revolved around ChatGPT, including its unexpected outcomes, potential risks, and inaccuracies. Antonio Garrido shared&#8230;<\/p>\n","protected":false},"author":10,"featured_media":1655,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"_acf_changed":false,"inline_featured_image":false,"footnotes":""},"categories":[130],"tags":[],"class_list":{"0":"post-1645","1":"post","2":"type-post","3":"status-publish","4":"format-standard","5":"has-post-thumbnail","7":"category-opinion-3"},"acf":[],"_links":{"self":[{"href":"https:\/\/valgrai.eu\/en\/wp-json\/wp\/v2\/posts\/1645","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/valgrai.eu\/en\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/valgrai.eu\/en\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/valgrai.eu\/en\/wp-json\/wp\/v2\/users\/10"}],"replies":[{"embeddable":true,"href":"https:\/\/valgrai.eu\/en\/wp-json\/wp\/v2\/comments?post=1645"}],"version-history":[{"count":2,"href":"https:\/\/valgrai.eu\/en\/wp-json\/wp\/v2\/posts\/1645\/revisions"}],"predecessor-version":[{"id":1660,"href":"https:\/\/valgrai.eu\/en\/wp-json\/wp\/v2\/posts\/1645\/revisions\/1660"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/valgrai.eu\/en\/wp-json\/wp\/v2\/media\/1655"}],"wp:attachment":[{"href":"https:\/\/valgrai.eu\/en\/wp-json\/wp\/v2\/media?parent=1645"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/valgrai.eu\/en\/wp-json\/wp\/v2\/categories?post=1645"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/valgrai.eu\/en\/wp-json\/wp\/v2\/tags?post=1645"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}