Who should be responsible for putting AGI into humanoid robots?


It can be challenging at times to wrap your head around all the groundbreaking innovations taking place right now. Over the last few decades we have lived in a time that makes it seem almost normal to have several world-changing technologies advancing at once, but few other periods in history have been like this. If you look back at the giant leaps for mankind, they nearly always stand alone: the printing press, gunpowder, the airplane, the computer. Each marked a period when humans wrestled with a single new technology to see how it could change our lives, and how the world could advance by discovering its many uses.

We are in a period where not just one globally significant technology is reshaping our daily lives; we have at least three: artificial intelligence, blockchain, and robotics. Quantum computing will join this list soon enough, but it still faces a few major hurdles before the tech can start affecting the average person's life. The pace of innovation in AI, blockchain, and robotics is so blistering that every single day brings new headlines about what a company or research institute has accomplished, changing our expectations of what we can do with these new tools.

Unlike other periods when a single technology occupied our attention, we are at a unique place in history where the leaders in these fields are actively exploring what becomes possible by combining them, and the implications are truly staggering. As a case in point, with AI progressing steadily, the conversation has turned to the race for AGI (artificial general intelligence), which would be perhaps the single largest leap in human achievement to date. At the same time, companies (and governments) around the world have made significant progress in developing humanoid robots, a true leap in innovation compared to the industrial robotic arms that manage automated assembly lines. It didn't take long for people to ask what these technologies might do when combined, with Elon Musk recently stating that Tesla will be among the first to build AGI, and could be the first to use it in humanoid robots.

This brings with it a number of reactions, and not all of them good. Musk and his companies have achieved true breakthroughs in innovation, but the word "responsibly" cannot be attached to much of that work. Setting aside workforce treatment and other more politically charged concerns, the evolution of Tesla's automated driving features alone shows a clear pattern of innovation at a pace that leaves a path of destruction (figuratively and literally). It also raises a bigger question we need to address: who should control that level of power, whether AGI, humanoid robots, or especially both? The answer could lie in that third major technology: blockchain, and specifically its ability to decentralize across borders and corporations. One of blockchain's founding goals has been to decentralize not just digital currency but power itself, and organizations like the ASI Alliance have been serious about building an infrastructure that gives access to everyone instead of leaving a few powerful governments or megacorporations holding the key.

What’s the threat? Are AGI robots even possible?

This question is complicated in some ways, but less relevant than you may realize. Looking first at the hardware, just the last two years have brought dramatic improvements: humanoid robots now run on efficient batteries and have been shown effortlessly performing gymnastics, running, completing tasks, and coordinating with one another. The improvements in the robots themselves are very real, and that short span has made clear both that humanoid robots will become part of our lives and that a number of different countries and major companies already have this technology.

What is less obvious is how well these robots can interact with the environment around them. Yes, there are impressive displays from Tesla's Optimus, Boston Dynamics' Atlas, and the various Chinese robots at the 2026 Chinese New Year Spring Festival Gala. These feats are incredible, but also meticulously scripted. The open question is how well these robots can use AI to be flexible in their tasks and decision making. With AGI on the horizon, as many industry leaders claim, equipping humanoid robots with that level of intelligence would be an obvious solution for interacting with an unscripted real-world environment. It's hard to say where these two technologies will meet, and watching funny videos of robots falling over or making terrible decisions can lower expectations. However, if you look back over just the last decade, see where we are today, and project forward, it is clear that both technologies will reach science fiction levels sometime in the near future. It might be a few years; it might be 10-15. But that time frame is now imminent, not some future generation's problem.

The big question: Who should hold the power?

Now that we've established that humanoid robots with AGI are inevitable at some point in the near future, the real issue is deciding who should be in control of that much combined power. This is where science fiction can actually provide surprising insight. We've seen how AI can improve our daily lives, and we are also seeing how AI can be used by bad actors to take advantage of us. Sadly, "bad actors" aren't just some shady hacker organization. Given the ongoing investigations taking place around the world, some of the worst actors manipulating people, spying on them, and using AI against them are corporations and even governments. Combine that behavior with the significant leap of AGI, and the concern grows very quickly. Then imagine fast, powerful, and precise humanoid robots running AGI, and we can see exactly what this might look like thanks to the countless dystopian science fiction stories that play out that scenario. This is a real concern, and it breaks down to a simple fact: if you give a few organizations (corporations, governments, or anyone else) power that far exceeds the average person's, that power will almost inevitably be used against the average person. The only way to prevent this is not through laws, military force, or even diplomacy, but by distributing the power across the population.

As mentioned above, the ASI Alliance (and perhaps others like it in the near future) is focused on distributing the knowledge and benefits of AGI. This not only gives the average person access, but also harnesses the genius of a global community instead of a siloed organization. With robotics evolving in different regions around the globe, a decentralized AGI considerably reduces the threat of asymmetrical power, preventing any single tech company or government from behaving in the ways we have seen before.
