Apple’s famed co-founder and engineering whiz, Steve Wozniak, has signed a Future of Life Institute open letter urging major AI labs to “immediately pause for at least 6 months the training of AI systems more powerful than GPT-4”.
The letter has notably been signed by other industry heavyweights as well as politicians and active AI researchers. The list of signatories includes Elon Musk, Andrew Yang, Stuart Russell, and Yoshua Bengio. At present, fewer than 1,200 people have signed.
The letter argues that AI systems present fundamental risks to humanity as a whole, and should be “managed with commensurate care and resources.” The letter suggests that the proclaimed level of care does not currently exist, as billions of dollars get poured into building nascent AI systems that “no one can understand, predict, or reliably control.”
The letter urges stricter controls over the development of AI systems, stating that they “should be developed only once we are confident that their effects will be positive and their risks will be manageable”. Until that confidence is earned, the letter calls for a public pause on the relevant AI training work for at least six months, even urging governments to step in if necessary.
The temporary halt, it argues, should be used to develop safety protocols that ensure stringent oversight of AI development going forward. The letter emphasizes that AI research should be refocused on creating more “interpretable, transparent” systems.
Governments are encouraged to work with AI developers to “accelerate development of robust AI governance systems”, the letter adds. A set of AI laws should also go into effect, including “watermarking systems to help distinguish real from synthetic” and a liability mechanism for harm caused by AI. Further, the letter calls for well-resourced institutional research to analyze and mitigate major AI-driven socioeconomic disruptions.
The letter concludes that “humanity can enjoy a flourishing future with AI” if humanity collectively measures and mitigates the potential societal risks posed by AI.