Why embody artificial intelligence?
ChatGPT: a meteoric adoption. OpenAI confirmed that it had surpassed 1 million users just five days after launching the app, making ChatGPT the fastest-adopted app ever. Although not confirmed by OpenAI, we estimate between 100 and 300 million users at the end of January 2023. Is this remarkable adoption due to a new technology? No, absolutely not: the underlying LLM (Large Language Model) had already been accessible through an API for two years. If not the technology, what sparked this enthusiasm? Its embodiment. Why embody artificial intelligence? What are the advantages? What are the disadvantages? Some explanations follow.
The LLM GPT-3.5 was released to the public in the form of a chatbot, ChatGPT. The chatbot format invites users to believe that the system has cognitive capabilities: that we can ask it questions, get answers, and trust those answers to be correct, to the point that a tool like ChatGPT could replace traditional search engines like Google. Yet the answers ChatGPT provides often leave something to be desired. This is quite natural: ChatGPT is not designed to answer questions, and even less to give correct answers. ChatGPT relies on an LLM whose job is "simply" to output the most likely next word given a string of input words, then append it and predict another word, and so on. ChatGPT is a kind of autocomplete tool, similar to what our smartphones do when we type a text message… but far more sophisticated. The ability to answer questions is, in a way, an emergent phenomenon arising from the fact that the model has been trained on a very large set of documents and conversations, with the risk that the answers are often far from reality. The chatbot, in essence, embodies the underlying artificial intelligence.
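To make this "autocomplete" loop concrete, here is a minimal sketch using the open-source Hugging Face transformers library and the small GPT-2 model as a stand-in (ChatGPT's own model is not public, and the prompt is purely illustrative): at each step the model scores every possible next token, the most likely one is appended, and the process repeats.

```python
# Minimal sketch of autoregressive generation: predict, append, repeat.
# GPT-2 is used as an open stand-in; ChatGPT's model is not publicly available.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

text = "The capital of France is"
input_ids = tokenizer(text, return_tensors="pt").input_ids

for _ in range(10):                          # generate 10 tokens, one at a time
    logits = model(input_ids).logits         # scores for every possible next token
    next_id = logits[0, -1].argmax()         # greedily pick the most likely one
    input_ids = torch.cat([input_ids, next_id.view(1, 1)], dim=1)

print(tokenizer.decode(input_ids[0]))
```

Nothing in this loop checks whether the continuation is true; it only maximizes plausibility given the training data, which is exactly why answers can be fluent yet wrong.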
So why embody it? The embodiment of artificial intelligence is a topic of great interest in academia and industry, as well as in popular culture. Personification is the process by which humans assign human traits to non-human things, such as animals, plants, and machines. In the case of artificial intelligence, this means that humans attribute human characteristics such as speech, thought, emotion, and consciousness to these machines.
Embodying AI in this way has a huge advantage: it lowers the barriers to using and adopting the technology. Everyone knows how to hold a conversation, so everyone knows how to use ChatGPT: to build complex queries and, quite simply, to interact with the chatbot.
As Figure 1 shows, this anthropomorphism is largely responsible for the meteoric adoption of ChatGPT. Curiously, even if this conversational ability proves anecdotal in the future, it has amazed, even awed, most of us. Otherwise, why haven't other AI systems, such as DALL-E 2 or Midjourney for image generation, also released in 2022, been adopted as quickly? The reason is that everyone knows how to hold a conversation, but not everyone is an artist, photographer, or graphic designer. Similarly, end users never adopted GPT-3, even though it is as "smart" as ChatGPT, because it is difficult to use. Conversely, voice assistants like Siri, Alexa, and OK Google have never truly taken off despite their ease of use, because they just aren't "smart" enough.
The embodiment of artificial intelligence for widespread adoption by developers and creators
An analysis of past waves of innovation and industrial revolutions reveals a recurring pattern: scientific discoveries are made; on this basis, technological building blocks are constructed; at first they are adopted by a minority; and when they are adopted by the majority, great upheavals occur at the level of business and society. As we saw above, anthropomorphism facilitates adoption by end users, but it also facilitates adoption by developers and entrepreneurs, who are essential to Schumpeter's great creative destruction.
In the era of mainstream adoption, innovation is synonymous with assembly: combining technological building blocks to invent new uses or new business models. Creativity means putting the puzzle pieces together in an original way. Innovation means testing a great many things, most of which are doomed to fail. Today, APIs and low-code/no-code tools are among the driving factors behind the adoption of AI building blocks by the majority. Companies like Hugging Face, a startup founded in 2016 by three Frenchmen in the US and already valued at $2 billion, aim to democratize AI by providing pre-trained models via APIs or visual development interfaces, as the sketch below illustrates. The years 2022 and 2023 are pivotal in the history, development, and adoption of artificial intelligence, because for the first time highly sophisticated tools became readily available to developers and entrepreneurs.
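As an illustration of how low this barrier has become, here is a minimal sketch using Hugging Face's open-source transformers library; the task (sentiment analysis) and the example sentence are chosen purely for illustration. A pre-trained model is downloaded and used in a few lines, with no training and no machine-learning expertise required.

```python
# Minimal sketch: reusing a pre-trained model as a ready-made building block.
# The task is illustrative; transformers ships many such pipelines.
from transformers import pipeline

classifier = pipeline("sentiment-analysis")   # downloads a default pre-trained model
print(classifier("Embodied AI is surprisingly easy to adopt."))
# e.g. [{'label': 'POSITIVE', 'score': 0.99...}]
```

This is the "assembly" logic described above: the hard scientific and engineering work is packaged as a block that developers and entrepreneurs can simply plug into a product.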
We have seen that embodying AI accelerates its adoption by users, developers, and entrepreneurs, thus fuelling the great creative destruction driven by AI. However, this anthropomorphism has limits and is not without risks.
First of all, as mentioned above, embodied AI leads to disappointment. The major players offering voice assistants (Google, Apple, Amazon) have opted for human voices, hence a strongly anthropomorphized AI. This gives users the impression that Alexa, Siri, or Google Assistant have advanced cognitive capabilities, which is clearly not the case, and this often leads to frustration and disappointment when using them. This first limit is well known and is often called the "uncanny valley". It is a theory formulated by the Japanese roboticist Masahiro Mori in the 1970s: the more human-like a robot is, the more apparent its flaws become, evoking a strange and uncomfortable sense of cognitive dissonance. The robot is then no longer judged as a machine that successfully imitates a human, but as a human who fails to behave normally.
Secondly, exaggerated anthropomorphism can lead to excesses. In particular, it may lead users to believe that the AI they are interacting with is conscious or sentient. In 2022, Blake Lemoine, an engineer at Google, got caught up in his own game and ended up convincing himself that LaMDA, Google's conversational agent, had become sentient. The conversation between Lemoine and the AI is truly astonishing[1] and clearly illustrates this risk.
Another danger of embodying AI is a direct corollary of its rapid adoption. The first industrial revolution unfolded over at least a century after the invention of the steam engine. The system (business, education, etc.) had time to adjust, so that the revolution ultimately became synonymous with human progress. The artificial intelligence revolution, especially in generative AI, is massive and fast. We are facing an exponential Schumpeter. All professions that produce and work with content (text, images, speech, video, etc.) will be greatly affected. Some jobs, including white-collar jobs so far spared from automation, could disappear almost overnight. This exponential Schumpeterian destruction may not be creative destruction, because the system will not have time to adjust.
Finally, the last danger of anthropomorphism and the rapid adoption of AI is lock-in. Some of the aforementioned platforms have become essential. Today, a startup that needs large-scale AI capabilities can easily use the infrastructure of these platforms almost for free, and thus become highly dependent on them. In today's unstable geopolitical environment, where issues of national sovereignty seem increasingly important, being locked into an American or Chinese platform is rather problematic.
We recently ran a survey on LinkedIn asking whether AI should be embodied because it lowers barriers to adoption, even though it raises questions and poses risks. Almost 70% of the respondents disagreed.
Embodying AI greatly accelerates its adoption by end users, developers, and entrepreneurs alike. We may want to consider whether the scale and speed of this development are really in the best interests of all of us: individuals, businesses, and society as a whole.
[1] https://cajundiscordian.medium.com/is-lamda-sentient-an-interview-ea64d916d917