A recent article took a detailed look at the dramatic and rapid sequence of events involving the firing of Sam Altman as the head of OpenAI by its board of directors, the subsequent resignations of much of the company's top talent, Microsoft (an investor in OpenAI) immediately hiring them, and then the resignation of the OpenAI board and the return of Altman and others to the company, all within the space of less than a week.
It is not the corporate maneuvering that I want to write about but the potential and dangers of AI. A great deal has been written about the new generation of AI software and whether it stays at the level of large language models, which seem to mimic intelligence but still require constant interaction, direction, and supervision by humans, or whether it achieves the more advanced level of artificial general intelligence (AGI), which is close to human intelligence and can function much more autonomously, and where that could lead.
Some employees at OpenAI and Microsoft, and elsewhere in the tech world, had expressed worries about AI companies moving forward recklessly. Even Ilya Sutskever, OpenAI's chief scientist and a board member, had spoken publicly about the dangers of an unconstrained AI "superintelligence."
