Is AMD inside Microsoft’s new OpenAI supercomputer? Surprise tweet may imply so
Ed: We reached out to Microsoft to confirm who provided the processors but did not get an answer. In the meantime, AMD has issued a rather candid tweet that some read as a hint that Epyc, AMD's server processor, powers the new OpenAI supercomputer; after all, it would make little sense to give a shoutout to such a halo product if a rival's silicon were actually inside it. The original story follows below.
Microsoft has unveiled a brand new supercomputer designed specifically to train gigantic artificial intelligence (AI) models, the firm announced at its annual Build conference.
Hosted in Microsoft Azure, the system boasts more than 285,000 CPU cores, 10,000 GPUs and 400Gbps of connectivity for each GPU server, placing it among the top five most powerful supercomputers in the world.
The supercomputer was built in partnership with and will be exclusively used by OpenAI, a San Francisco-based firm dedicated to the ethical implementation of AI.
Microsoft supercomputer
According to a Microsoft blog, the design of its supercomputer was informed by new developments in the field of AI research, which suggest large-scale models could unlock a host of new opportunities.
Developers have traditionally designed specific small-scale AI models for each individual task, such as identifying objects or parsing language. However, the new consensus among researchers is that these objectives can be better achieved via a single colossal model, which digests extraordinary volumes of information.
“This type of model can so deeply absorb the nuances of language, grammar, knowledge, concepts and context that it can excel at multiple tasks: summarizing a lengthy speech, moderating content in live gaming chats...or even generating code from scouring GitHub,” reads the post.
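To make the idea concrete, here is a minimal sketch of "one model, many tasks" prompting, assuming the Hugging Face transformers library and the small, publicly available gpt2 model as stand-ins; this is purely illustrative and is not the model stack Microsoft or OpenAI actually use.

```python
# A minimal sketch: one general-purpose generative model prompted for two
# different tasks, instead of a separate small model trained per task.
# Assumes the Hugging Face "transformers" library and the public "gpt2"
# checkpoint as illustrative stand-ins (not Microsoft's or OpenAI's stack).
from transformers import pipeline

# Load a single text-generation model once.
generator = pipeline("text-generation", model="gpt2")

# Task 1: summarization-style prompt.
summary_prompt = (
    "Summarize: Microsoft built an Azure-hosted supercomputer with more than "
    "285,000 CPU cores and 10,000 GPUs to train large AI models.\nSummary:"
)

# Task 2: code-completion-style prompt.
code_prompt = "# Python function that adds two numbers\ndef add(a, b):"

# The same model handles both prompts.
print(generator(summary_prompt, max_new_tokens=30)[0]["generated_text"])
print(generator(code_prompt, max_new_tokens=30)[0]["generated_text"])
```

A toy model like gpt2 handles these prompts only crudely; the article's point is that, at sufficient scale, a single model starts to perform well across many such tasks at once.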
The new supercomputer, according to Microsoft, marks a significant step on the road to training these ultra-large models and making them available as a platform for developers to build upon.
“The exciting thing about these models is the breadth of things they’re going to enable,” said Kevin Scott, Microsoft CTO.
“This is about being able to do a hundred exciting things in natural language processing at once and a hundred exciting things in computer vision, and when you start to see combinations of these perceptual domains, you’re going to have new applications that are hard to even imagine right now.”
Beyond its partnership with OpenAI, Microsoft has also developed its own large-scale AI models (referred to as Microsoft Turing Models) under its AI at Scale initiative, designed to improve language understanding across its product suite.
The eventual goal, according to the Redmond giant, is to make these models and its supercomputing resources available to businesses and data scientists via Azure AI services and GitHub.