The AI space just got more interesting. Meta's decision to open-source LLaMA for non-commercial use surprised the existing players, who intended to keep their technology closed and proprietary.
NYTimes link: In Battle Over A.I., Meta Decides to Give Away Its Crown Jewels
Current players believe that open-source models could lead to uncontrolled LLMs, citing safety limits. Although this is concerning, it raises the question: how can we verify that systems trained behind closed doors are actually controlled?
But the claim that AI will someday give birth to twins, God and Satan, was the highlight.
Onion Article: AI will someday give birth to twins
According to Sam Altman's past interviews, he believes that in the future only 3 or 4 companies will dominate the LLM market, with tier-2 companies using these models to build their own systems. If open-source models prevail, that vision will be crushed.
Interestingly, OpenAI released a blog post today titled "Governance of Superintelligence".
Given the exponential growth of artificial superintelligence (ASI), and the fact that it would be the most powerful system ever created by humans, "they want to form an international oversight organization for future AI systems".
They want to build coordination among ASI development efforts, spanning the development, integration, and growth rate of ASI systems. While this sets high standards and responsibilities for existing companies, it also restricts the growth of new players.
Creating an IAEA-like governing body for ASI is a no-brainer. With comparisons to nuclear energy and synthetic biology, it's even more compelling. But here's the catch: should we let a government-like entity control what AI says? It's up for debate.
They want to make artificial superintelligence (ASI) safe. But what does that even mean? Are we in danger now? And how transparent are they about their safety procedures? So many questions...
"We think it's important to allow companies and open-source projects to develop models below a significant capability threshold..." Why hold only open-source projects below this capability threshold? Will the same restriction apply to existing companies already building above it? And how transparent will they be about it?
Why we need ASI:
+ A better world
+ Improved quality of life
+ Economic growth
+ Tremendous potential
Why it's becoming more feasible:
+ Cheaper to build
+ More actors (open & closed source)
But here's the catch: ASI is risky. We need global oversight, or something similar, to govern it.
Control is mandatory for ASI. I would love to see a world where ASI is free and decentralised, but within boundaries that do not endanger lives, do not pose undue risk, and do not advantage only a few players.
What do you think about ASI Governance?
Originally posted on my Twitter.