Pure Storage, NVIDIA launch enterprise AI supercomputer in a box

Pure Storage and NVIDIA have launched “AI in a box” for enterprise customers. Chris Middleton talks to Pure Storage CTO Alex McMullan about the strategy behind the team-up.

Flash storage provider Pure Storage and hardware giant NVIDIA have announced what they say is a state-of-the-art AI supercomputer ready to be slotted into a customer data centre.

AIRI, which the companies describe as “the industry’s first comprehensive, AI-ready infrastructure”, is designed to help organisations deploy artificial intelligence at scale, and speed time to insight.

The new converged-infrastructure appliance is essentially “AI in a box”, and is intended to provide an architecture that “empowers organisations with the data-centric infrastructure needed to harness the true power of AI”, according to a joint announcement from the companies.

What’s in the box?

The integrated hardware/software solution includes Pure Storage FlashBlade, a storage platform architected for analytics and AI, and four NVIDIA DGX-1 supercomputers, delivering “four petaflops of performance” via NVIDIA Tesla V100 GPUs.

The systems are interconnected with Arista 100GbE switches, supporting GPUDirect RDMA for maximum distributed performance. AIRI is also supported by the NVIDIA GPU Cloud deep-learning stack and Pure Storage’s new AIRI Scaling Toolkit.
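
For readers who want to sanity-check the headline number, the quoted four petaflops lines up with NVIDIA’s published peak figures: each DGX-1 houses eight Tesla V100s, and each V100 is rated at roughly 125 teraflops for mixed-precision tensor operations. A minimal back-of-the-envelope sketch (peak ratings, not sustained throughput):

```python
# Rough check of AIRI's quoted "four petaflops", assuming NVIDIA's peak
# figures: 8 Tesla V100s per DGX-1, ~125 TFLOPS per V100 (tensor ops).
DGX1_SYSTEMS = 4
GPUS_PER_DGX1 = 8
TFLOPS_PER_V100 = 125  # peak mixed-precision tensor performance

total_tflops = DGX1_SYSTEMS * GPUS_PER_DGX1 * TFLOPS_PER_V100
print(f"Aggregate peak: {total_tflops} TFLOPS = {total_tflops / 1000:.0f} PFLOPS")
# -> Aggregate peak: 4000 TFLOPS = 4 PFLOPS
```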

All of this high-performance, optimised hardware will enable data scientists to “jumpstart their AI initiatives in hours, rather than weeks or months”, said the announcement.

As some sections of the media zero in on the perceived problems and ethical challenges associated with AI, Pure Storage stressed the social benefits of the technology.

“AI has fantastic potential for aiding humanity,” said Charles Giancarlo, CEO of Pure Storage. “It has the capacity to significantly improve the quality of all of our lives. AIRI will accelerate AI research, enabling innovators to more rapidly make advances to create a better world with data.”

That’s all very well, but how much does all this cost? On that point, Pure Storage and NVIDIA remained tight-lipped and pointed to their channel partners, but the specification of the hardware suggests this may be for enterprises with deep pockets. Perhaps AIRI can tell us.

The CTO speaks out

Pure Storage CTO Alex McMullan told Internet of Business that while AIRI is an “industry first”, the focus is on making AI “accessible to just about everyone”.

“If you can actually drop an AI supercomputer into a customer data centre in a couple of hours, then that’s a huge time-to-market benefit,” he said.

“We’ve worked with NVIDIA to make a high-end, state-of-the-art supercomputer available in 24 inches of data centre infrastructure, replacing what would otherwise be racks and racks of stuff.”

What was the main driver behind the idea? “This is a very data-driven world, with big data sets and data footprints,” explained McMullan. “And we have a whole separate thread here at Pure Storage about data gravity, and why that’s a challenge and concern for the industry.

“AI is really about data quality, it’s about data provenance, and it’s about having the right level of training data that is correctly tagged and indexed.

“But some of our existing customers who are doing machine learning have 200 or 300 people who do nothing but categorise and tag images and other data, because that’s what’s required in this space. They told us, ‘We spend a lot of time with wires and cables and boxes trying to plug all this stuff together, so wouldn’t it be great if…’

“That’s what started the conversation with ourselves and NVIDIA. We had a number of joint customers, but we thought it would make more sense to have a single offering.”

So does AIRI (AI-Ready Infrastructure) itself include AI software, or is the appliance optimised for other vendors’ solutions?

“It brings it up to a specific level where all the tool sets, libraries, and models that a data scientist would expect are installed on the platform. But if you have your own data sets and models you can certainly apply them,” said McMullan.

The public cloud problem

So what would the advantage be to an organisation of implementing an on-premises, appliance-based solution, as opposed to deploying something like Watson or another solution in the cloud?

“For me the answer is it’s all about [the problem of] the public cloud,” said McMullan. “The public cloud has challenges with data gravity. For me, the public cloud is there to deliver agility and time to market, but it’s not there to deliver scale and cost efficiencies.

“What we see quite often is that many of our existing customers experiment, integrate, and develop in the public cloud at small scale, but once they have larger deployments and data sets, once they have bigger clusters, it always comes back on premises, because that’s the most cost-effective way of deploying it.

“For me, something like Watson is very much a start, a small-scale, early-adopter technology, whereas the NVIDIA/Pure solution is very much an industrial-scale behemoth.”

AIRI: “An industrial-scale behemoth”.

Customers speak out

AIRI is launching with three named customer partners onboard: outsourced call centre provider Global Response, AI business applications provider Element AI, and AI pathology specialist, Paige.AI.

Paige.AI aims to transform clinical diagnoses and oncology via the use of artificial intelligence. “With access to one of the world’s largest tumour pathology archives, we needed the most advanced deep learning infrastructure available to quickly turn massive amounts of data into clinically validated AI applications,” said Dr. Thomas Fuchs, founder and chief science officer of Paige.AI.

Meanwhile, Element AI, a platform for companies to build their own AI solutions, sees AIRI as an “accelerant” for complex projects. “AIRI represents an exciting breakthrough for AI adoption in the enterprise, shattering the barrier of infrastructure complexities and clearing the path to jumpstart any organisation’s AI initiative,” said Jeremy Barnes, chief architect at Element AI.

Finally, Global Response has begun development of a call centre system that allows for the real-time transcription and analysis of customer support calls. “We’ve reached an inflection point where integration of AI throughout our organisation is critical to the ongoing success of our business,” said Stephen Shooster, Co-CEO, Global Response.

“While we wanted to move quickly, the infrastructure for AI was slowing us down, because it is very complex to deploy. To truly operationalise AI at scale, we needed to build a simple foundation powerful enough to support the entire organisation.”

Internet of Business says

McMullan’s own background is in financial services technology, with stints at UBS and Barclays, along with Sun Microsystems and British Aerospace. However, he said that despite sectors such as financial services being in the vanguard of AI adoption, AI is really just about “shovelling large amounts of data into a GPU engine”.

“The cardinality and the structure of that data doesn’t really matter. The chief thing is it’s best at finding trends, patterns, and outliers,” he said.

However, one thing that is increasingly important to AI adopters – and to legislators and regulators – is the question of AI’s transparency and ‘auditability’. Does an AI in a box make ‘showing the workings’ of AI easier, or harder?

“In terms of transparency, I don’t think this changes the equation,” said McMullan. “It’s still using the same native software tools, it’s simply allowing you to get to a result considerably faster and more reliably.

“However, I think that the combination of Pure Storage and NVIDIA allows you to have a much larger training set, which to me is the key foundation of any kind of machine-learning-based approach. The better and bigger your training set, the better the results you’re going to get.

“But the workings inside the box are still a software output, and I don’t think we’re there yet in terms of understanding the complete result based on the input.”

So now we know.

Some of our recent AI-related reports:

Read more: Affectiva launches emotion tracking AI for connected car drivers

Read more: Mayor of London launches project to make capital epicentre of AI

Read more: HPE aims new tech portfolio at enterprise AI deployments

Read more: Group protects rainforest with recycled phones, machine learning

Read more: AI regulation & ethics: How to build more human-focused AI

Read more: IBM launches new Watson Assistant AI for connected enterprises

Read more: Analysis: Oracle says autonomy now, AI with everything by 2020

 


Japan’s latest supercomputer is dedicated to nuclear fusion

This year, Japan will deploy a Cray XC50 that will be the world's most powerful supercomputer in the field of advanced nuclear fusion research. It will be installed at the National Institutes for Quantum and Radiological Science (QST) and used for lo…

How the United States Plans to Reclaim Its Supercomputer Dominance

Race to the Top

For a long while now, there has been a not-so-subtle competition between the United States and China that extends to pretty much everything that both nations do, from solar manufacturing to waste processing. More recently, that race has come to include scientific research and technological development.

China seems to have overtaken the U.S. in the latter. From research in artificial intelligence to building a quantum network, and now housing the world’s most powerful supercomputers, China now enjoys the number one spot. In fact, its top two supercomputers far outperform all of the 21 supercomputers in the U.S. operated by the Department of Energy (DOE).

U.S. researchers, however, are keen on reclaiming the top of the league table, and the latest machine in their pipeline could be the key. At the Oak Ridge National Laboratory in Tennessee, experts are building Summit, the supercomputer that’s said to replace the most powerful machine in the U.S. today. It’s set to be finished sometime in 2018.

This isn’t the only one in the works, though. At the Argonne National Laboratory in Lemont, Illinois, scientists are planning to build a supercomputer that’s even faster than Summit. Dubbed the A21, this machine could churn out some 1,000 peta floating-point operations per second (petaflops), or 10^18 flops, which is the estimated capacity of a human brain. Summit’s theoretical maximum performance is around 200 petaflops.
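
To put those figures side by side, here is the arithmetic implied by the numbers above (a petaflop being 10^15 floating-point operations per second):

```python
# Arithmetic behind the A21 and Summit figures quoted above.
PETA = 10**15

a21_flops = 1_000 * PETA    # 1,000 petaflops = 1 exaflop = 10^18 flops
summit_flops = 200 * PETA   # Summit's theoretical maximum

print(f"A21: {a21_flops:.0e} flops (exascale)")
print(f"A21 vs Summit: {a21_flops / summit_flops:.0f}x faster")
# -> A21: 1e+18 flops (exascale)
# -> A21 vs Summit: 5x faster
```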

Getting Their Bearings Straight

With those numbers, both machines would far exceed China’s Sunway TaihuLight — currently the world’s most powerful computer — which is capable of 93 petaflops. The A21 is supposed to be built by 2021, two years ahead of schedule, with the help of Intel and Cray. Scientists are scheduled to meet this week in Knoxville, Tennessee, to examine the first detailed designs for the supercomputer.

This seems like a solid plan to reclaim the top spot, although Science notes that China and maybe even Japan are more likely to launch an exascale (1,000 petaflop) computer first, with the former set to unveil one called the Tianhe-3 by 2020, in line with its five-year plan.

At the very least, working on Summit and A21 would keep the U.S. from being left in the dust by China’s achievements in supercomputer development. For decades, the U.S. was the undisputed leader in the field. China snatched the title only in 2013 and has maintained it since. Until now, that is.


Turn your brain into a supercomputer with an improved vocabulary and reading speed — and take an extra 15% off


Take your communication skills to the next level with this double-barreled pairing of self-help exercises: the vocabulary-building Vocab 1 and the reading-comprehension-boosting 7 Speed Reading EX. Both software programs are on sale together right now, with an extra 15 percent discount, down to just $24.65 from TNW Deals.

The Next Most Powerful Supercomputer in the U.S. Is Almost Complete

Computing for Science

Inside one of the rooms of the Oak Ridge National Laboratory (ORNL) in Tennessee, the next fastest and most powerful supercomputer in the U.S. is getting ready to solve some of science’s biggest questions. Meet the Summit supercomputer, which is designed to be the successor to ORNL’s current high-performance supercomputer, Titan.

Supercomputers have been around for some time now. The high-performance computing these devices are capable of makes them ideal for running larger and more complex computational problems, such as those that deal with questions of science. For Oak Ridge Leadership Computing Facility project director Buddy Bland, these types of big problems would make a good test of Summit’s capabilities.

“We will try to run some big problems,” he told the Knoxville News Sentinel, referring to the period of “early science” Summit will be busy with as soon as it becomes operational in 2018. The remaining parts of Summit will arrive in February, Bland said. “Then we would expect the machine to be built up and accepted sometime next summer.”

Once built, the science problems Summit will tackle include testing and developing stronger, lighter manufacturing materials; the use of sound waves to model the inside of the Earth; and other astrophysics projects that explore the universe’s origins.

“For instance, we’ll be looking into why supernovae explode,” Bland said. “When stars explode, they create all of the elements we find in the universe, everything that’s part of you and me and part of this planet gets created.” The ORNL team expects Summit to be available for open research by 2019.

Radically More Powerful Supercomputer

Currently, the world’s fastest and most powerful supercomputer is in China. The Sunway TaihuLight is capable of 93 petaFLOPS (peta floating-point operations per second) of computing power, with a theoretical peak of 125 petaFLOPS. FLOPS measure computing speed, and a petaFLOPS is equal to one thousand million million (10^15) FLOPS. Today’s regular computers, in contrast, can only perform about 63 gigaFLOPS, although Intel has developed a chip that can give your desktop computer some teraFLOPS of power.
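
As a quick illustration of the gulf those units describe, the following sketch compares TaihuLight’s sustained 93 petaFLOPS with the roughly 63 gigaFLOPS desktop figure quoted above:

```python
# Comparing TaihuLight with an ordinary desktop, using the figures above.
GIGA, PETA = 10**9, 10**15

taihulight = 93 * PETA   # sustained performance
desktop = 63 * GIGA      # typical desktop estimate

print(f"TaihuLight / desktop: {taihulight / desktop:,.0f}x")
# -> TaihuLight / desktop: 1,476,190x (about 1.5 million times faster)
```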

[Infographic: Meet the Most Powerful Computers in the World]

Summit, on the other hand, would be capable of 200 petaFLOPS, which potentially outperforms the Sunway TaihuLight. It’s able to do this thanks to GPU-accelerated computing, which means it uses a central processing unit — like regular computers do — but couples it with graphics processing units (GPUs) to perform computations. GPUs create realistic visuals in video games, but they also help scientists when they use supercomputers to understand physical phenomena.
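
To make the CPU-plus-GPU division of labour concrete, here is a minimal, generic sketch of offloading a large matrix multiplication to a GPU. It assumes a machine with an NVIDIA GPU and the cupy library installed; it illustrates GPU-accelerated computing in general, not Summit’s actual software stack:

```python
# Generic illustration of GPU-accelerated computing: the CPU orchestrates,
# the GPU does the heavy numerical work. Assumes an NVIDIA GPU and the
# cupy library; this is not Summit's actual software stack.
import numpy as np
import cupy as cp

a_host = np.random.rand(4096, 4096).astype(np.float32)
b_host = np.random.rand(4096, 4096).astype(np.float32)

a_gpu = cp.asarray(a_host)   # copy inputs to GPU memory
b_gpu = cp.asarray(b_host)
c_gpu = a_gpu @ b_gpu        # the matrix multiply runs on the GPU
c_host = cp.asnumpy(c_gpu)   # copy the result back to the host

print(c_host.shape)          # (4096, 4096)
```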

“For the first time, we were able to do a simulation of a star exploding in three dimensions and we found that it wasn’t symmetric all the way around,” Bland said, referring to an exploding star simulation ORNL ran on the Titan. “When a star starts collapsing in on itself and then explodes out, you’ve got these rolling and tumbling things.”

In short, Summit will be the fastest supercomputer around by 2018. But ORNL doesn’t want to stop there. By 2021, it plans to develop the world’s first exascale computer, capable of processing information at one exaFLOPS, or a billion billion (10^18) operations per second. That’s roughly 50 times faster than the fastest supercomputer in the U.S. today. Still, as powerful as these supercomputers will be, scientists are also looking forward to a universal quantum computer.


Nvidia says its new supercomputer will enable the highest level of automated driving

Nvidia, one of the world’s best known manufacturers of computer graphics cards, announced a new, more powerful computing platform for use in autonomous vehicles. The company claims its new system, codenamed Pegasus, can be used to power Level 5, fully driverless cars without steering wheels, pedals, or mirrors.

The new iteration of the GPU maker’s Drive PX platform will deliver over 320 trillion operations per second, which amounts to more than 10 times its predecessor’s processing power. Pegasus will be marketed to the hundreds of automakers and tech companies that are currently developing self-driving cars, starting in the second half of 2018, the company says.

Nvidia’s…


SpaceX, NASA, and HP Are Sending a Supercomputer to the ISS

The Importance of 30 Minutes

The International Space Station is nearly twenty years old. During almost two decades in low-Earth orbit, the floating laboratory has offered the opportunity to test many a hypothesis in microgravity.

Often, these experiments have to do with biology and biochemistry. Take for instance studying the effects of space radiation on mammalian reproduction, or flatworm regeneration in microgravity. However, hardware also has a place in the lab.

The current computers on the ISS – the ones that operate the station – run on a microprocessor first introduced in 1985. That may not sound like enough to power the football-field-sized station; however, these computers are supported by 24/7 monitoring from the ground by even more powerful computers.

The system does the job, for now. It doesn’t take long for information to travel from the ISS to the ground. However, when humans eventually get to the Red Planet, communicating between Mars and Earth will result in a bit of a delay. No, not quite a la The Martian. More like 30 minutes each way.

This may not sound like much, but, as Alain Andreoli, senior vice president of Hewlett Packard Enterprise’s (HPE) data center infrastructure group, explained in a blog post:

A long communication lag would make any on-the-ground exploration challenging and potentially dangerous if astronauts are met with any mission critical scenarios that they’re not able to solve themselves.

Essentially, half an hour could cost someone their life.

Hardened Software

So why aren’t scientists just sending better computers to space? Well, space travel is pretty rough on technology, and NASA has high demands. Computers aboard the ISS need to withstand space-related problems such as “radiation, solar flares, subatomic particles, micrometeoroids, unstable electrical power, irregular cooling,” explained Andreoli. This “hardening” process results in additional costs and unnecessary bulk.

What if traditional, off-the-shelf computer components could be made to withstand the rigors of space? NASA and HPE are working together to find out. On Monday, a SpaceX rocket will launch a supercomputer called the Spaceborne Computer to the ISS for a year-long experiment (coincidentally, roughly the amount of time it would take humans to get to Mars).

The computer has not been hardened for the radiation environment on the space station in the traditional sense. Instead, it’s been “software hardened.” The goal is to better understand how space will degrade the performance of an off-the-shelf computer. Meanwhile, back on Earth, an identical model will run in a lab as a control.

The computer is only about the size of two pizza boxes stuck together. It has a special water-cooled enclosure as well as custom software that can automatically adjust for environmentally-induced computer errors. It may not be the most powerful computer on the market, but with its 1 teraflop computing speed, it’ll be the most powerful computer ever sent into space.
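
HPE has not spelled out here exactly how its “software hardening” works, but one classic software-level technique for tolerating transient, radiation-induced errors is to run a computation redundantly and take a majority vote. A minimal, purely illustrative sketch of that idea (not HPE’s disclosed implementation):

```python
# Illustrative sketch of software-level fault masking: run a deterministic
# computation several times and accept the majority result, so a single
# transient (e.g. radiation-induced) error is detected and masked.
# This is a generic technique, not HPE's disclosed Spaceborne design.
from collections import Counter

def majority(results):
    """Return the value most runs agree on, or raise if there is no agreement."""
    value, count = Counter(results).most_common(1)[0]
    if count < 2:
        raise RuntimeError("no agreement between redundant runs")
    return value

def hardened(fn, *args, runs=3):
    return majority([fn(*args) for _ in range(runs)])

print(hardened(lambda x: x * x, 12))   # -> 144
```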

“This goes along with the space station’s mission to facilitate exploration beyond low Earth orbit,” Mark Fernandez, HPE’s leading payload engineer for the project, told Ars Technica. “If this experiment works, it opens up a universe of possibility for high performance computing in space.”

Not only will this result in better computers aboard the ISS and other NASA craft that can send humans farther into space, but it will also help with experiments on the ISS. Fernandez explains that scientists could use an on-board supercomputer for data processing, rather than sending the data back to Earth.


SpaceX and HP Enterprise to send supercomputer to ISS next week


Next week, a SpaceX CRS-12 rocket will launch from Kennedy Space Center in Cape Canaveral. Its payload will include an HP Enterprise (HPE) supercomputer, called the Spaceborne Computer, which will be used to see if off-the-shelf computer components can be built to withstand the harsh conditions of space. Space travel is notoriously brutal to tech. It often shortens the lifespan of hardy ThinkPads to mere months, forcing NASA to send a regular supply of laptops to the International Space Station, alongside pouches of ready-to-eat chilli con carne and freeze-dried ice cream. Consequently, most heavy computing is done on terra…


This Tiny Supercomputer Consumes 98% Less Power and 99.93% Less Space

1 petaFLOPS, 1 Rack

This week, AMD unveiled Project 47, a supercomputer that crams a whopping 1 petaFLOPS of computing performance into a single server rack. This means Project 47 is as powerful as IBM’s $100 million Roadrunner — the world’s most powerful supercomputer in 2007 — which required 2,350,000 watts of electricity, 6,000 square feet of floor space, and 296 racks. In contrast, Project 47 consumes 98 percent less power and 99.93 percent less space, requiring just a single rack.

The IBM Roadrunner cluster was primarily composed of approximately 12,960 PowerXCell processors and 6,912 Opteron CPUs. Project 47 comprises 80 Radeon Instinct GPUs, 20 AMD EPYC 7601 processors, and 20 Mellanox 100G cards, and it includes 10TB of Samsung memory. AMD says it would take 33.3 MW of power and 1,000 Project 47 racks to scale Project 47 up to 1 exaFLOPS.

A Step Forward

Project 47 is part of a wider movement to reduce the footprint of supercomputers, and each stride forward means improved efficiency and less energy used to get the same amount of — or a lot more — computing power. Increasing computing power will be critical for the management of more sophisticated systems, such as those that house artificial intelligences (AI) in safe, productive ways.

The system is built around the Inventec P47, a 2U parallel computing platform designed for machine intelligence and graphics virtualization applications. Project 47’s 1 petaFLOPS was achieved using a single rack of Inventec P47 systems. It requires only 33.3 kW for a petaFLOPS of computational power thanks to its 30 gigaFLOPS per watt energy efficiency — making it 25 percent more efficient than competing supercomputing platforms, according to AMD.
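
Those efficiency claims are internally consistent, which a few lines of arithmetic confirm (using only the figures quoted in this article):

```python
# Cross-checking Project 47's efficiency figures against each other
# and against the Roadrunner numbers quoted earlier in this article.
GIGA, PETA = 10**9, 10**15

p47_watts = 33_300             # 33.3 kW for one Project 47 rack
p47_flops = 1 * PETA           # 1 petaFLOPS per rack
roadrunner_watts = 2_350_000   # IBM Roadrunner, also roughly 1 petaFLOPS

print(f"Efficiency: {p47_flops / p47_watts / GIGA:.1f} GFLOPS per watt")
print(f"Power saving vs Roadrunner: {1 - p47_watts / roadrunner_watts:.1%}")
# -> Efficiency: 30.0 GFLOPS per watt
# -> Power saving vs Roadrunner: 98.6%
```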

AMD claims the Project 47 rack beats any other comparably configured system in terms of compute units, cores/threads, memory channels, and I/O lanes in simultaneous use. The system should be on sale later this year, although AMD has yet to release the price.
