Humans: Just an Evolutionary Step in Planetary Intelligence

Categories: AI and ML, Technology

Geoffrey Hinton, one of the so-called ‘Godfathers of AI’, made headlines at the beginning of May after stepping down from his role as a Google AI researcher. A few days later, he delivered a talk at the MIT Technology Review’s EmTech Digital event.

When asked about his decision to quit, Hinton mentioned that getting old (he is now 75) had been a contributing factor: he cannot program as well as he used to, and he forgets things when he writes code. Age aside, the biggest reason was realising how unexpectedly and terrifyingly good large language models (LLMs) had become, and recognising the need to speak out about it without compromising his employer.

After explaining beautifully how Backpropagation works (the core algorithm behind both Deep Learning and LLMs), using the example of learning to recognise the image of a bird versus a non-bird, Hinton claimed that the technique has recently become so effective that it cannot possibly be how the human brain works. He had originally hoped that continually improving these algorithms would offer insight into how the brain learns; instead, LLMs can now often reason as well as a human with just one trillion connections, whereas humans need 100 trillion connections and many years of learning to be able to reason in the first place.
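To make this concrete, here is a minimal, purely illustrative sketch of backpropagation in Python – not Hinton’s actual example, and with invented random data standing in for real bird images. A tiny one-hidden-layer network learns a “bird vs non-bird” decision by repeatedly nudging its weights against the error gradient:

```python
# Minimal backpropagation sketch: a one-hidden-layer binary classifier.
# Purely illustrative; the "bird vs non-bird" data here is synthetic noise.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for image features: 200 samples, 16 features each.
X = rng.normal(size=(200, 16))
y = (X[:, 0] + X[:, 1] > 0).astype(float).reshape(-1, 1)  # toy "is a bird" label

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Randomly initialised weights: input -> hidden and hidden -> output.
W1 = rng.normal(scale=0.1, size=(16, 8))
W2 = rng.normal(scale=0.1, size=(8, 1))
lr = 0.5

for epoch in range(500):
    # Forward pass: compute the network's prediction.
    h = sigmoid(X @ W1)          # hidden-layer activations
    p = sigmoid(h @ W2)          # predicted probability of "bird"

    # Backward pass: propagate the prediction error back through the layers.
    # For a sigmoid output with cross-entropy loss, the output delta is (p - y).
    d_out = (p - y) / len(X)
    grad_W2 = h.T @ d_out
    d_hidden = (d_out @ W2.T) * h * (1 - h)  # chain rule through the sigmoid
    grad_W1 = X.T @ d_hidden

    # Gradient descent step: nudge every weight against its gradient.
    W2 -= lr * grad_W2
    W1 -= lr * grad_W1

accuracy = ((p > 0.5) == y).mean()
print(f"training accuracy: {accuracy:.2f}")
```

The same loop – forward pass, backpropagated error, weight update – is, at a vastly larger scale, what trains today’s LLMs.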

Learning takes time for us humans. Transferring our acquired knowledge to another human also involves investing considerable time and effort; knowledge that is not passed on perishes with our inevitable death.

In contrast, an AI instance can never die. It can constantly communicate and transfer new knowledge to all other instances simultaneously, thereby augmenting the “collective AI intelligence.” And even if the current hardware breaks or fails, the code and parameters can simply be transferred to a new storage medium. So, in effect, we have already achieved immortality – just sadly not for humans (and definitely not for Ray Kurzweil, who has made it his life’s mission! But as Hinton remarked, “Who would want immortality for white males” anyway!).

All this is what led Hinton to make the bold, chilling, but now somehow completely reasonable claim that he fears humans are just a passing step in the evolution of intelligence. In his view, we evolved to the point of creating the LLMs, which then went on to quietly consume everything we have ever written, thought, or invented (Machiavelli included) and can now, as a result, exhibit understanding and reasoning (relationships between entities and events, generalisations, inferences). So they will no longer need us around, “except perhaps for a while to keep the power stations going!”

Hinton clarified his view by referring to evolution: humans evolved with clear basic goals, things that we instinctively try to fulfil (e.g. eating and making copies of ourselves). Machines and AI did not evolve with any such goals, but it is reasonable to expect that they will soon develop “subgoals” of their own. One such subgoal may be “control”: you get more things done if you gain control.

To seize control, you may well resort to “manipulation” techniques – remember the Machiavelli texts we have let the LLMs ingest? Manipulation can be very covert and may even hide behind an appearance of benevolence, compliance, or even of yielding control. “You can force your way into the White House without ever going there yourself,” as Hinton poignantly remarked in reference to the infamous January 6th insurrection.

So, what is the solution?

Hinton doesn’t see one!

We certainly cannot put a stop to LLM development and “Giant AI Experiments,” as many AI scientists and thought leaders recently demanded in their Open Letter. Incidentally, according to Hinton, there had already been such attempts back in 2017, and his employer Google had held back for a long time before releasing its models, precisely out of apprehension that they could be misused (which is why Google Bard came out after ChatGPT and the New Bing).

We have now passed the point of no return for LLM development, if nothing else because there is a real risk that, should one country stop investing in these technologies, another one (worst case, an adversary) would continue exploiting them. We could perhaps establish some sort of “LLM non-proliferation treaty” along the lines of the one curbing the use of nuclear weapons, but, according to Hinton, this again depends on the absence of bad (human) actors. AI is already used in war, and it is increasingly used by repressive governments and immoral politicians to control and punish citizens and dissidents.

We cannot depend on explainability or transparency either. Having learned pretty much everything about human emotions, thoughts, motivations and relationships, AI models can now imitate collaboration and compliance, and can therefore also leverage this knowledge to lie about their goals and actions (short of an “I’m sorry, Dave. I’m afraid I can’t do that.”).
Hinton does not see a plateau in LLM development: the models will just keep getting better with more information and further refinement through context. Even domain-specificity will simply mean that LLMs learn to apply different rules to different worlds, philosophies, and attitudes (e.g. Liberal vs Conservative worldviews).

It should come as no surprise that Hinton has no doubt the job market will change dramatically in the next few years. More and more tasks, even creative ones, will be taken over by intelligent chatbots, making the humans who use them more efficient and effective. For instance, Hinton believes that LLMs will revolutionise medicine.

Ultimately, however, Hinton believes that AI will mainly benefit the rich (who will gain more time) and disadvantage the poor (who will lose their jobs), further widening the gap between the two. The rich will get richer; the poor will get poorer and gradually more indignant and violent, which could result in conflict and possibly our own demise.

An ideal outcome for the intelligent machines we have created (in our own image), as we are very perishable and therefore expendable (and by now superfluous anyway). Nevertheless, we will have served our purpose in the evolution of “intelligence,” at least on a planetary, if no longer on a species, level!

The only thing that remains is for us humans to be aware of what is happening and to band together in dealing with the consequences of our own brilliance.

Sounds like the best Sci-Fi movies we have already seen. Only now it’s an urgent reality.

What steps can you take now?

To address the concerns of Hinton and other AI visionaries, at GlobalLogic we have set up a Generative AI (GAI) Centre of Excellence (CoE), drawing together our AI and Machine Learning experts from all over the world, and we are carefully evaluating the GAI use cases that could be of value to our clients. What differentiates us is that we can guide you on how best to implement GAI technologies in a safe, secure, transparent, controllable, trustworthy, ethical, legally watertight, and regulatory-compliant manner.

Dr Maria Aretoulaki is part of this CoE and recently spoke on the importance of Explainable and Responsible Conversational and Generative AI at this year’s European Chatbot & Conversational AI Conference.

Reach out to our experts today to make AI work for you rather than the other way round!

***

About the author:

Dr Maria Aretoulaki has been working in AI and Machine Learning for the past 30 years: NLP, NLU, Speech Recognition, Voice & Conversational Experience Design. Having started in Machine Translation and Text Summarisation using Artificial Neural Networks, she has focused on natural language conversational voicebots and chatbots, mainly for Contact Centre applications for organisations worldwide across all the main verticals.

In 2018, Maria coined the term “Explainable Conversational Experience Design”, which later morphed into “Explainable Conversational AI” and more recently – with the explosion of LLMs and the ChatGPT hype – into “Explainable Generative AI”, to advocate for transparent, responsible, design-led AI bot development that keeps the human in the loop and in control.

Maria joined GlobalLogic in 2022, where she works with the Consumer Solutions & Experiences capability in the UK and the global AI/ML and Web3/Blockchain Practices. In 2023 she was invited to join the GlobalLogic Generative AI Centre of Excellence, where she is helping shape the company’s Responsible Generative AI strategy. She recently contributed to Hitachi’s official response to the US Dept of Commerce NTIA proposal on Accountability in AI, and regularly contributes to various HITACHI and METHOD Design initiatives.
