Geoffrey Hinton, known as the "Godfather of AI," recently shared his thoughts on his former students and on AI safety. Hinton expressed pride in his students, and in particular in one, Ilya Sutskever, who helped fire Sam Altman from OpenAI, because Hinton believes Altman puts profits ahead of safety. Altman's ouster in November 2023 was brief; he was reinstated within days.
Hinton's 2024 Nobel Prize in Physics surprised many, and some physicists were even annoyed, since Hinton's expertise lies in computer science. His key contribution is foundational work on artificial neural networks, systems loosely modeled on the human brain. He long maintained that such networks could produce machine intelligence, a view many researchers once dismissed, and his conviction ultimately proved right.
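To make the idea concrete, here is a minimal sketch (illustrative only, not Hinton's actual models) of an artificial neural network: layers of simple units that sum weighted inputs, pass the result through a nonlinearity, and learn by propagating errors backward through the layers. The layer sizes, learning rate, and toy task are arbitrary choices for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Toy task: XOR, which a single artificial "neuron" cannot solve
# but a small layered network can.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(size=(2, 4))  # input -> hidden weights
b1 = np.zeros(4)
W2 = rng.normal(size=(4, 1))  # hidden -> output weights
b2 = np.zeros(1)

for _ in range(5000):
    # Forward pass: each unit sums weighted inputs and "fires".
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backward pass (backpropagation): push the prediction error
    # back through the layers to adjust the weights.
    d_out = out - y                      # output-layer gradient (cross-entropy loss)
    d_h = (d_out @ W2.T) * h * (1 - h)   # hidden-layer gradient
    W2 -= 0.5 * h.T @ d_out
    b2 -= 0.5 * d_out.sum(axis=0)
    W1 -= 0.5 * X.T @ d_h
    b1 -= 0.5 * d_h.sum(axis=0)

print(out.round(2))  # should approach [[0], [1], [1], [0]]
```

After training, the network's outputs converge toward the XOR targets, something no single linear unit can achieve, which is the basic point of stacking brain-inspired layers.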
Hinton also described his reaction to winning the prize. The call came at 1 AM and left him shocked; he had never expected an award in physics. He credited his mentors and students for his success, emphasized that young researchers should focus on AI safety, and urged governments to push companies to support that research.
Hinton worries about AI becoming smarter than humans, which he believes could happen within the next 20 years. He wants far more research into preventing AI from becoming harmful, and he thinks governments should regulate AI development closely.
He sees fake videos and phishing attacks as the most immediate risks. Phishing attacks increased by 1,200% last year, a roughly thirteen-fold rise that he attributes to advances in language models: the flawless language they now generate makes phishing messages far harder to spot. Hinton believes these are only the first of the risks AI will bring.
Hinton's concerns go beyond these immediate risks. He thinks superintelligent AI, intelligence that might surpass the entire human race, could pose serious threats. He drew an analogy to a superhuman chess engine: it always wins, yet even experts cannot fully explain how. Hence his stress on AI alignment, the problem of ensuring that an AI system's goals do not conflict with human goals.
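As a toy illustration of that alignment point (my own hypothetical example, not Hinton's), consider an optimizer given a proxy objective that only loosely tracks what we actually want; maximizing the proxy can select exactly the behavior we intended to avoid. All names and scores below are invented.

```python
# Three candidate responses, each scored on what we truly care about
# (helpfulness) and on a crude proxy metric (length).
candidates = {
    "concise correct answer": {"helpfulness": 9, "length": 30},
    "padded rambling answer": {"helpfulness": 3, "length": 400},
    "keyword-stuffed spam":   {"helpfulness": 1, "length": 900},
}

# The optimizer relentlessly maximizes the proxy it was given...
best_by_proxy = max(candidates, key=lambda c: candidates[c]["length"])
# ...while the intended goal would have picked something else entirely.
best_by_intent = max(candidates, key=lambda c: candidates[c]["helpfulness"])

print("optimizer picks:", best_by_proxy)    # keyword-stuffed spam
print("we wanted:      ", best_by_intent)   # concise correct answer
```

The gap between the two picks is the misalignment; with a system far more capable than its designers, that gap is much harder to detect and correct.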
Hinton plans to donate his Nobel Prize money to charities, including one that supports neurodiverse young adults. He hopes AI will boost productivity and improve lives, but he insists that safety must remain central to AI development.
Governments, he argues, should ensure that companies allocate real resources to AI safety. AI models are improving fast, and safety needs equal attention; if AI becomes unsafe, the consequences could be severe. Hinton's call to action is clear: prioritize AI safety research now.