“Prosthetic Memory Systems,” Delivered Via Electrode, Could Be Dope, If You’re Willing To Wait A While

Prosthetic memory systems: no longer just some sci-fi nonsense.

Researchers just completed a military-funded project intended to boost patients’ recall. At first glance, the numbers look really promising. At second glance, though, they might just be enough cause for optimism, but, well, not much more. 

The 15 participants were seeking treatment for epilepsy-related memory loss at North Carolina’s Wake Forest Baptist Medical Center. They had already received surgery to place small brain implants in an effort to map what was going on in their brains to better treat their epilepsy.

In the study, published in the Journal of Neural Engineering on March 28, the participants were asked to complete a simple task: look at an image on a screen, then correctly identify it among three or four other images after a short delay. While they did so, the researchers mapped their brain activity to identify the region that was most active when a participant remembered the correct image.

In a second trial, the researchers used those small electrodes to stimulate the “correct answer” areas they had just identified.

The result? Stimulated participants’ short-term memory improved by 37 percent, and their long-term memory (or what the researchers are calling long-term memory: a similar task with a longer delay) improved by 35 percent.
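For a rough sense of what “identifying” a memory pattern means in practice, here is a toy sketch in Python. It is not the study’s actual decoding model, and every number and array in it is made up; it only illustrates the general idea of ranking recording channels by how well their activity separates correct from incorrect trials.

```python
import numpy as np

# Toy stand-in for electrode recordings: 200 trials x 32 channels of
# firing-rate features, plus whether each trial was answered correctly.
rng = np.random.default_rng(0)
activity = rng.normal(size=(200, 32))
correct = rng.integers(0, 2, size=200).astype(bool)

# Crude separability score per channel: how far apart the mean activity is
# on correct vs. incorrect trials, relative to its overall spread.
gap = np.abs(activity[correct].mean(axis=0) - activity[~correct].mean(axis=0))
score = gap / (activity.std(axis=0) + 1e-9)

best_channel = int(np.argmax(score))
print("channel most associated with correct recall:", best_channel)
# The study then stimulated through electrodes at the sites it identified.
```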

“This is the first time scientists have been able to identify a patient’s own brain cell code or pattern for memory and, in essence, ‘write in’ that code to make existing memory work better, an important first step in potentially restoring memory loss,” said Robert Hampson, the lead researcher on this project, in a press release.

Dope.

The researchers received funding from DARPA in the hope that their work could help soldiers who face memory loss after head injuries.

Some caveats: this was one clinical trial conducted on just 15 people who were asked to complete one specific, simple task in a hospital setting. It’s not at all clear this would help you stop losing your keys so damn much, nor would you necessarily want to undergo surgery to try it. At least, not at its current stage of development, which is just proof-of-concept.

The results from this latest memory boosting study, which the researchers are calling a “prosthetic memory system,” are impressive. They might even inspire optimism, if you’re into that sort of thing. This experiment lays the groundwork for future human research into technology that can restore or enhance brain function, and that’s nothing to dismiss.

But for as long as scientists have studied memory loss, no matter its cause, the timeline for a viable solution has always been “the near future,” “sometime down the line.” A stock answer for when Alzheimer’s might be cured is “50 years away,” conveniently after the scientist giving the answer will likely have retired.

So what does this study show? A cool, promising future of prosthetic memories. But not for, say, 50 years or so.

Apple Patent Application Aims to Put VR Systems in Autonomous Cars

Apple’s patents might not be a clear indicator of what’s actually going to become a product anytime soon, but they do at least present a clear look at some of Apple’s lofty ideas.

Apple’s Health App can Show Medical Records From 39 Health Systems

Today’s been a busy day for Apple, with the company launching updates for iOS, tvOS, and watchOS, but it’s not quite done with the news.

FogHorn Systems follows up Google IIoT collaboration with Wind River partnership

A new collaboration between FogHorn Systems and Intel-owned Wind River will see the integration of FogHorn’s Lightning edge analytics and ML platform with Wind River’s software products to advance IIoT.

As part of the collaboration, announced a week after a partnership with Google, both parties have agreed to combine the FogHorn platform with Wind River software, including Wind River Helix Device Cloud, Wind River Titanium Control, and Wind River Linux, to help industrial organisations meet the competitive imperative of harnessing the power of their IIoT data.

The combined solution was on display at Embedded World 2018, in Nuremberg, Germany, from 27 February to 1 March. 

While FogHorn allows organisations to position data analytics and ML as near as possible to the data source, Wind River offers the technology to support manageability of edge devices throughout their life span, virtualisation for workload consolidation and software portability through containerisation.
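As a purely illustrative sketch (none of this is FogHorn’s or Wind River’s actual API), “positioning analytics as near as possible to the data source” usually means summarising or filtering the raw sensor stream on the edge device itself and forwarding only the interesting results upstream:

```python
import random
from collections import deque
from statistics import mean, stdev

def edge_anomalies(readings, window=50, z_threshold=3.0):
    """Yield only anomalous readings instead of the full raw stream.

    `readings` is any iterable of (timestamp, value) pairs from a local
    sensor; the names and thresholds here are generic placeholders.
    """
    buf = deque(maxlen=window)
    for ts, value in readings:
        if len(buf) >= 10:
            mu, sigma = mean(buf), stdev(buf)
            if sigma > 0 and abs(value - mu) / sigma > z_threshold:
                # Only anomalies leave the device, keeping latency low
                # and upstream bandwidth small.
                yield {"ts": ts, "value": value, "baseline": mu}
        buf.append(value)

# Synthetic demo: a steady signal with one spike at t=120.
random.seed(1)
stream = ((t, 20.0 + random.gauss(0, 0.5) + (15.0 if t == 120 else 0.0))
          for t in range(200))
for event in edge_anomalies(stream):
    print("forward to the cloud / device-management layer:", event)
```

The device-management, virtualisation and containerisation pieces described above would then be what ships, updates and monitors a component like this across a fleet of gateways.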

The FogHorn platform is claimed to be the planet’s most advanced, compact and feature-rich edge intelligence solution, capable of offering superior low latency for onsite data processing, real-time analytics, ML and AI capabilities.

Commenting on the team up, Wind River CPO Michael Krutz said: "Wind River's collaboration with FogHorn will solve two big challenges in Industrial IoT today, getting analytics and machine learning close to the devices generating the data, and managing thousands to hundreds of thousands of endpoints across their product lifecycle. We’re very excited about this integrated solution, and the significant value it will deliver to our joint customers globally."

FogHorn’s partnership with Google Cloud aims to provide business impact expansion of IIoT via the integration of Cloud IoT Core capabilities with its Lightning edge intelligence and ML platform.

FogHorn Systems and Google Cloud team up to offer IIoT solution

FogHorn Systems and Google Cloud have come together to expand business impact of Industrial IoT (IIoT) applications by combining the capabilities of Cloud IoT Core and FogHorn’s Lightning edge intelligence and ML platform.

This integration creates a foundation for optimising distributed assets and processes in several industries, including manufacturing, oil and gas, mining, connected cars, smart buildings and smart cities. The partnership also aims to ease the deployment of IIoT applications.
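The announcement doesn’t describe the wiring in detail, but the Cloud IoT Core half of such an integration is a documented public interface: a device or edge gateway authenticates to the MQTT bridge with a JWT signed by its private key and publishes telemetry to a per-device events topic. The sketch below shows only that path; the project, registry, device and payload names are placeholders, not FogHorn product code.

```python
import datetime
import json

import jwt                       # pip install pyjwt
import paho.mqtt.client as mqtt  # pip install paho-mqtt

# Placeholder identifiers -- substitute a real project, registry and device.
PROJECT, REGION = "my-project", "us-central1"
REGISTRY, DEVICE = "factory-gateways", "edge-gw-01"
PRIVATE_KEY_FILE, ALGORITHM = "rsa_private.pem", "RS256"

def make_jwt(project_id):
    """Cloud IoT Core expects a short-lived JWT whose audience is the project id."""
    now = datetime.datetime.utcnow()
    claims = {"iat": now, "exp": now + datetime.timedelta(minutes=60), "aud": project_id}
    with open(PRIVATE_KEY_FILE) as key_file:
        return jwt.encode(claims, key_file.read(), algorithm=ALGORITHM)

client_id = (f"projects/{PROJECT}/locations/{REGION}"
             f"/registries/{REGISTRY}/devices/{DEVICE}")
client = mqtt.Client(client_id=client_id)
client.username_pw_set(username="unused", password=make_jwt(PROJECT))  # JWT is the password
client.tls_set()                                  # TLS is required by the bridge
client.connect("mqtt.googleapis.com", 8883)
client.loop_start()

# A hypothetical, already-summarised reading from the edge analytics layer.
payload = json.dumps({"machine": "press-7", "vibration_rms": 4.2, "anomaly": True})
client.publish(f"/devices/{DEVICE}/events", payload, qos=1)

client.loop_stop()
client.disconnect()
```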

The combined solution will be available at Google Cloud Next, from July 24 to 27, in San Francisco.

Antony Passemard, head of IoT product management at Google Cloud, said: “Cloud IoT Core simply and securely brings the power of Google Cloud’s world-class data infrastructure capabilities to the IIoT market. By combining industry-leading edge intelligence from FogHorn, we’ve created a fully-integrated edge and cloud solution that maximizes the insights gained from every IoT device. We think it’s a very powerful combination at exactly the right time.”

The FogHorn Lightning platform is a compact, advanced and feature-rich edge intelligence solution that can deliver low latency for onsite data processing, real-time analytics, ML and AI capabilities.

David King, CEO at FogHorn, said: “Our integration with Google Cloud harmonises the workload and creates new efficiencies from the edge to the cloud across a range of dimensions. This approach simplifies the rollout of innovative, outcome-based IIoT initiatives to improve organizations’ competitive edge globally, and we are thrilled to bring this collaboration to market with Google Cloud.”

Travis Kalanick is joining the real estate startup City Storage Systems as CEO

Kalanick invested $150 million in the company through his 10100 fund.

Former Uber CEO Travis Kalanick has found his new job. The controversial Silicon Valley entrepreneur is joining a startup called City Storage Systems that focuses on repurposing real estate assets.

Kalanick, who will be CEO, invested $150 million into the 15-person startup, according to a statement he tweeted on Tuesday.

That initial investment gives Kalanick a controlling interest in the company. Two of its businesses focus on buying and repurposing real estate assets in the food and retail space, according to Kalanick. The company will also work with parking and industrial assets.

“There are over $10 trillion in these real estate assets that will need to be repurposed for the digital era in the coming years,” he wrote.

The Los Angeles-based limited liability company — a company of the same name was incorporated yesterday in Delaware, according to state records — will acquire those assets and then outfit them for new use cases.

Earlier this month, Kalanick announced the launch of his personal investment fund, called 10100. In June 2017, the former Uber CEO stepped down from his post at the ride-hail company under pressure from major company shareholders.

This is developing …


The red-hot AI hardware space gets even hotter with $56M for a startup called SambaNova Systems

Another massive financing round for an AI chip company is coming in today, this time for SambaNova Systems — a startup founded by a pair of Stanford professors and a longtime chip company executive — to build out the next generation of hardware to supercharge AI-centric operations.

SambaNova joins an already quite large class of startups looking to attack the problem of making AI operations much more efficient and faster by rethinking the actual substrate where the computations happen. While the GPU has become increasingly popular among developers for its ability to handle, at very high speed, the kinds of lightweight mathematics necessary for AI operations, startups like SambaNova look to create a new platform from scratch, all the way down to the hardware, that is optimized exactly for those operations. The hope is that, by doing so, such a platform will be able to outclass a GPU in terms of speed, power usage, and even potentially the actual size of the chip. SambaNova today said it has raised a massive $56 million series A financing round led by GV, with participation from Redline Capital and Atlantic Bridge Ventures.

SambaNova is the product of technology from Kunle Olukotun and Chris Ré, two professors at Stanford, and is led by former SVP of development Rodrigo Liang, who was also a VP at Sun for almost 8 years. In surveying the landscape, the team at SambaNova worked backwards: first identifying what operations need to happen more efficiently, then figuring out what kind of hardware needs to be in place to make that happen. That boils down to a lot of calculations, stemming from a field of mathematics called linear algebra, done very, very quickly, but it’s something that existing CPUs aren’t exactly tuned to do. And a common criticism from most of the founders in this space is that Nvidia GPUs, while much more powerful than CPUs when it comes to these operations, are still ripe for disruption.
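“Linear algebra done very, very quickly” mostly means large dense matrix multiplications: one fully connected neural-network layer is essentially a matrix product plus a simple nonlinearity, repeated enormous numbers of times during training and inference. A minimal NumPy illustration (the shapes are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
batch, d_in, d_out = 256, 1024, 1024

x = rng.standard_normal((batch, d_in), dtype=np.float32)   # a batch of inputs
w = rng.standard_normal((d_in, d_out), dtype=np.float32)   # layer weights
b = np.zeros(d_out, dtype=np.float32)                      # layer bias

# One layer's forward pass: roughly 2 * batch * d_in * d_out floating-point ops.
y = np.maximum(x @ w + b, 0.0)                              # matmul + bias + ReLU
print(y.shape, f"~{2 * batch * d_in * d_out / 1e9:.1f} GFLOPs per pass")
```

Dedicated AI chips are judged largely on how fast, and at what power cost, they can push through exactly this pattern.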

“You’ve got these huge [computational] demands, but you have the slowing down of Moore’s law,” Olukotun said. “The question is, how do you meet these demands while Moore’s law slows. Fundamentally you have to develop computing that’s more efficient. If you look at the current approaches to improve these applications based on multiple big cores or many small, or even FPGA or GPU, we fundamentally don’t think you can get to the efficiencies you need. You need an approach that’s different in the algorithms you use and the underlying hardware that’s also required. You need a combination of the two in order to achieve the performance and flexibility levels you need in order to move forward.”

While a $56 million funding round for a series A might sound massive, it’s becoming a pretty standard number for startups looking to attack this space, which has an opportunity to beat massive chipmakers and create a new generation of hardware that will be omnipresent in any device built around artificial intelligence, whether that’s a chip sitting on an autonomous vehicle doing rapid image processing or a server within a healthcare organization training models for complex medical problems. Graphcore, another chip startup, got $50 million in funding from Sequoia Capital, while Cerebras Systems also received significant funding from Benchmark Capital.

Olukotun and Liang wouldn’t go into the specifics of the architecture, but they are looking to redo the operational hardware to optimize for the AI-centric frameworks that have become increasingly popular in fields like image and speech recognition. At its core, that involves a lot of rethinking of how interaction with memory occurs and what happens with heat dissipation for the hardware, among other complex problems. Apple, Google with its TPU, and reportedly Amazon have taken an intense interest in this space, designing their own hardware optimized for products like Siri or Alexa, which makes sense: driving latency as close to zero as possible while keeping accuracy high improves the user experience. A great user experience leads to more lock-in for those platforms, and while the larger players may end up making their own hardware, GV’s Dave Munichiello, who is joining the company’s board, says this is basically a validation that everyone else is going to need the technology soon enough.

“Large companies see a need for specialized hardware and infrastructure,” he said. “AI and large-scale data analytics are so essential to providing services the largest companies provide that they’re willing to invest in their own infrastructure, and that tells us more investment is coming. What Amazon and Google and Microsoft and Apple are doing today will be what the rest of the Fortune 100 are investing in in 5 years. I think it just creates a really interesting market and an opportunity to sell a unique product. It just means the market is really large, if you believe in your company’s technical differentiation, you welcome competition.”

There is certainly going to be a lot of competition in this area, and not just from those startups. While SambaNova wants to create a true platform, there are a lot of different interpretations of where it should go, such as whether it should be two separate pieces of hardware that handle inference and training separately. Intel, too, is betting on an array of products, as well as a technology called field-programmable gate arrays (FPGAs), which allow for a more modular approach in building hardware specialized for AI and are designed to be flexible and change over time. Both Munichiello’s and Olukotun’s arguments are that these require developers with specialized FPGA expertise, a sort of niche within a niche that most organizations will probably not have readily available.

Nvidia has been a massive beneficiary of the explosion of AI systems, but that explosion has clearly exposed a ton of interest in investing in a new breed of silicon. There’s certainly an argument for developer lock-in on Nvidia’s platforms like CUDA. But there are a lot of new frameworks, like TensorFlow, that are creating a layer of abstraction that is increasingly popular with developers. That, too, represents an opportunity for both SambaNova and other startups, which can just work to plug into those popular frameworks, Olukotun said. Cerebras Systems CEO Andrew Feldman actually addressed some of this on stage at the Goldman Sachs Technology and Internet Conference last month.

“Nvidia has spent a long time building an ecosystem around their GPUs, and for the most part, with the combination of TensorFlow, Google has killed most of its value,” Feldman said at the conference. “What TensorFlow does is, it says to researchers and AI professionals, you don’t have to get into the guts of the hardware. You can write at the upper layers and you can write in Python, you can use scripts, you don’t have to worry about what’s happening underneath. Then you can compile it very simply and directly to a CPU, TPU, GPU, to many different hardwares, including ours. If in order to do work you have to be the type of engineer that can do hand-tuned assembly or can live deep in the guts of hardware there will be no adoption… We’ll just take in their TensorFlow, we don’t have to worry about anything else.”
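A minimal illustration of the abstraction Feldman is describing, using the public TensorFlow 2 API (the printed device names depend entirely on the machine it runs on): the same few lines run unchanged whether the backend is a CPU, a GPU, or a TPU runtime.

```python
import tensorflow as tf  # pip install tensorflow

# The researcher writes ordinary Python; no hand-tuned kernels in sight.
x = tf.random.normal((256, 1024))
w = tf.random.normal((1024, 1024))
y = tf.nn.relu(tf.matmul(x, w))   # TensorFlow lowers this to whatever backend exists

print("available devices:", tf.config.list_physical_devices())
print("result placed on:", y.device)   # e.g. .../device:CPU:0 or .../device:GPU:0
```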

(As an aside, I was once told that CUDA and those other lower-level platforms are really used by AI wonks like Yann LeCun building weird AI stuff in the corners of the Internet.)

There are, also, two big question marks for SambaNova: first, it’s very new, having started just in November, while many of these efforts from both startups and larger companies have been years in the making. Munichiello’s answer to this is that the development for those technologies did, indeed, begin a while ago, and that’s not a terrible thing as SambaNova gets started in the current generation of AI needs. The second, among some in the valley, is that most of the industry just might not need hardware that does these operations in a blazing fast manner. The latter concern, you might argue, is undercut by the fact that so many of these companies are getting so much funding, with some already reaching close to billion-dollar valuations.

But, in the end, you can now add SambaNova to the list of AI startups that have raised enormous rounds of funding, one that stretches out to include a myriad of companies around the world like Graphcore and Cerebras Systems, as well as a lot of reported activity out of China with companies like Cambricon Technology and Horizon Robotics. This effort does, indeed, require significant investment, not only because it’s hardware at its base, but also because each company has to actually convince customers to deploy that hardware and start tapping the platforms it creates, which supporting existing frameworks hopefully alleviates.

“The challenge you see is that the industry, over the last ten years, has underinvested in semiconductor design,” Liang said. “If you look at the innovations at the startup level all the way through big companies, we really haven’t pushed the envelope on semiconductor design. It was very expensive and the returns were not quite as good. Here we are, suddenly you have a need for semiconductor design, and to do low-power design requires a different skillset. If you look at this transition to intelligent software, it’s one of the biggest transitions we’ve seen in this industry in a long time. You’re not accelerating old software, you want to create that platform that’s flexible enough [to optimize these operations] — and you want to think about all the pieces. It’s not just about machine learning.”

Lyft team-up will build self-driving car systems on a large scale

If Lyft is going to translate self-driving car experiments into production vehicles offering rides, it's going to need some help — and it's on the way. The company has formed a partnership with Magna that will see the two jointly fund and develop a…

Tesla Powerwall systems help some Hawaii schools beat the heat

Tesla shipped Powerwall batteries to Puerto Rico last fall — and to Australia last December — and now it's helping Hawaii. Again. Specifically, it supplied equipment to the island state to help schools combat Hawaii's tropical temperature and relat…

Apple updates all of its operating systems to fix app-crashing bug

It took a few days, but Apple already has a fix out for a bug that caused crashes on each of its platforms. The company pushed new versions of iOS, macOS and watchOS to fix the issue, which was caused when someone pasted in or received a single India…