Future of AI

You know who else has a 5-year plan? AI does. AI will achieve AGI in 5 years. And it won’t care about YOUR 5-year plan.

To better understand this trajectory, consider exploring the free course “The Future of AI” by BlueDot Impact. It takes no more than two hours and offers valuable insight into the evolution of AI: where it stood five years ago, where it is today, and why many experts are sounding the alarm about its exponential growth, driven by unprecedented financial investment.

There is an urgent need to accelerate the development of AGI governance, not only through legal and political frameworks but also by addressing the profound moral and ethical questions it raises.

Importantly, this conversation must extend beyond the tech sector. Individuals from diverse backgrounds should engage in proposing concrete solutions. And with the rise of intuitive prompting techniques (sometimes referred to as “vibe coding”), technical expertise is no longer a prerequisite for meaningful participation.

A central question remains: Who should control AGI, and how?

As someone who strongly supports moral and political cosmopolitanism, I advocate for a global, ad hoc institution to oversee AGI. However, given the current geopolitical climate, marked by rising nationalism and authoritarianism, a more pragmatic approach might involve a multilateral framework that ensures a balance of AGI power. See, for instance, this argument that multipolarity is better than the alternative, nuclear war, which is now closer than ever.

It is in the interest of everyone, including authoritarian regimes, to build strong defensive capabilities against the loss of control over AGI. If you would like to get involved and do more for AI safety, have a look at this online community: https://www.aisafety.com/

I have completed the certification available at this link.
