OpenAI introduces Superalignment
OpenAI commits 20% of its compute to tackling superintelligence alignment. The new Superalignment team is co-led by Ilya Sutskever & Jan Leike
Superintelligence could help solve many of the world's most important problems, but it also carries serious risks, including human disempowerment or even extinction
Team's goal: build a roughly human-level automated alignment researcher, then use large amounts of compute to scale its efforts and align superintelligence
OpenAI plans to share this research broadly, improving the safety of current models and helping mitigate other AI risks along the way
Why it matters: OpenAI's substantial compute commitment signals that it takes superintelligence risks seriously and is investing proactively in a safer AI future.