🤖 As long as AI aligns with humanity, we are fine...

Good Morning! It’s Friday, the weekend is here, and it’s the first day of summer ☀️ Also some interesting news from the perspective of this newsletter: I got an advertiser! I did not expect that yet, but it is always welcome, of course. Rest assured, I will make sure any advertising fits the newsletter, so do not expect AliExpress products to be pushed here!
In the news today:
🤖 Board member who removed Sam Altman from OpenAI for a weekend starts a new company
🌳 US is not happy with the deforestation plans of the EU
🕰️ ICYMI, with highlights of major business topics and some Dutch pride 🇳🇱
AI
A new company to align AI with the existence of humanity

Ilya’s tweet announcing his new company
What Happened
Do you remember all the drama at OpenAI late last year? Sam Altman was ousted by several board members and returned days later, after nearly all employees threatened to walk away with him to Microsoft. Central to that drama was one man, Ilya Sutskever. His main reason for voting to remove Altman was safety. As we know, Altman returned, and in the end it was Ilya who left OpenAI. Now he is back: he is starting his own AI company, Safe Superintelligence Inc. (SSI).
Why It Matters
Ilya Sutskever is a renowned AI researcher who has made significant contributions to the field, including his work on deep learning and neural networks. His decision to leave OpenAI and form SSI underscores the growing concern among experts about the need for a greater focus on AI safety.
Safe superintelligence refers to AI systems that are not only highly capable but also aligned with human values and interests. As AI becomes more advanced and powerful, ensuring that these systems remain safe and beneficial to humanity is increasingly critical. The launch of SSI shows how central that concern has become: for many researchers and entrepreneurs, safety and alignment are now a top priority, not an afterthought.
What's Next
SSI’s progress will be watched closely: other AI ventures founded by former OpenAI employees, such as Anthropic, have been very successful. The race to hire top talent and make breakthroughs in safe superintelligence is expected to heat up. Salaries in the AI industry are already rising quickly, but the defining battleground may well be the differing views on AI safety itself.