MIT’s wearable device can ‘hear’ the words you say in your head

If you've read any sort of science fiction, it's likely you've heard about subvocalization, the practice of silently saying words in your head. It's common when we read (though it does slow you down), but it's only recently begun to be used as a way…

Mind-reader: MIT’s AlterEgo wearable knows what you’re about to say

A wearable being developed at MIT’s Media Lab knows what its wearer is going to say before any sound is made.

The AlterEgo device uses electrodes to pick up neuromuscular signals in the jaw and face that are triggered by internal verbalisations – all before a single word has been spoken, claim MIT’s researchers.
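
The researchers haven’t published their model here, but the pipeline they describe is easy to picture: window the multi-channel electrode signal, extract a feature per channel, and classify the result against a small vocabulary. Below is a minimal, purely illustrative Python sketch – the feature choice, vocabulary, and nearest-centroid classifier are all our assumptions, standing in for the trained machine-learning models the real system relies on.

```python
import numpy as np

# Illustrative only: classify a (channels, samples) window of electrode
# readings into a small fixed vocabulary. AlterEgo's real pipeline uses
# trained machine-learning models; a nearest-centroid classifier stands
# in for them here.

def features(window: np.ndarray) -> np.ndarray:
    """Per-channel root-mean-square energy, a common electromyography feature."""
    return np.sqrt((window ** 2).mean(axis=1))

def train_centroids(examples: dict) -> dict:
    """Average the feature vectors recorded for each vocabulary word."""
    return {word: np.mean([features(w) for w in windows], axis=0)
            for word, windows in examples.items()}

def classify(window: np.ndarray, centroids: dict) -> str:
    """Return the vocabulary word whose centroid is nearest in feature space."""
    f = features(window)
    return min(centroids, key=lambda word: np.linalg.norm(f - centroids[word]))

# Toy usage: two 'words', four electrode channels, synthetic signals.
rng = np.random.default_rng(0)
examples = {"yes": [rng.normal(1.0, 0.1, (4, 250)) for _ in range(5)],
            "no":  [rng.normal(0.2, 0.1, (4, 250)) for _ in range(5)]}
centroids = train_centroids(examples)
print(classify(rng.normal(1.0, 0.1, (4, 250)), centroids))  # -> "yes"
```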

Every one of us has an internal monologue of sorts, a place where our most intimate thoughts come and go as they please. Now, thanks to sophisticated sensors and the power of machine learning, the act of saying words in your head might not be so private after all.

MIT believes that the simple act of concentrating on a particular vocalisation is enough to engage the system and receive a response, and it has developed an experimental prototype that appears to prove it.

To ensure that the conversation remains internal, the device includes a pair of bone-conduction headphones. Instead of sending sound directly into the ear, these transmit vibrations through the bones of the face to the inner ear, conveying information back to the user without interrupting the normal auditory experience.

Read more: Apple hires Google AI chief to head machine learning | Analysis

The benefits of silent speech

Arnav Kapur, the graduate student who is leading development of the new system at MIT’s Media Lab, wants to augment human cognition with more subtlety than today’s devices allow for. “Our idea was: Could we have a computing platform that’s more internal, that melds human and machine in some ways, and that feels like an internal extension of our own cognition?” he said.

Kapur’s thesis advisor, Professor Pattie Maes, points out that our current relationship with technology – particularly smartphones – is disruptive in the negative sense. These devices demand our attention and often distract us from real-world conversations, our own thoughts, and other things that should demand greater attention, such as road safety.

“We basically can’t live without our cellphones, our digital devices,” she said. “But at the moment, the use of those devices is very disruptive. If I want to look something up that’s relevant to a conversation I’m having, I have to find my phone and type in the passcode and open an app and type in some search keyword, and the whole thing requires that I completely shift attention from my environment and the people that I’m with to the phone itself.”

The challenge is to find a way to alter that relationship without sacrificing the many benefits of portable technology.

“So, my students and I have for a very long time been experimenting with new form factors and new types of experience that enable people to still benefit from all the wonderful knowledge and services that these devices give us, but do it in a way that lets them remain in the present,” she said.

Read more: MIT’s CSAIL lab studies aquatic life with robot fish

The potential of AlterEgo

Rather than seeing the technology as a precursor to some kind of Orwellian dystopia, the MIT team believes that, once perfected, it could improve the relationship between people and the devices they use, as well as serve a variety of practical functions.

So far, the device has been able to surreptitiously give users information such as the time, and solve mathematical problems. It has also given wearers the power to win chess games, silently receiving opponents’ moves and offering computer-recommended responses, claims MIT.

The team is still collecting data and training the system. “We’re in the middle of collecting data, and the results look nice,” Kapur said. “I think we’ll achieve full conversation some day.”

The platform could one day provide a way for people to communicate silently in environments where noise is a concern, from runway operators to special forces soldiers. And it could perhaps even open up a world of verbal communication for people who have been disabled by illness or accident.

Read more: Health IoT: New wearable can diagnose stomach problems

Internet of Business says

The rise of voice search in the US – where 20 percent of all searches are now voice-triggered, according to Google – together with the rapid spread of digital assistants, such as Siri, Alexa, Cortana, Google Assistant, and IBM’s new Watson Assistant, has shifted computing away from GUIs, screens, and keyboards. And, of course, smartphones and tablets have moved computers off the desktop and out of the office, too.

However, while voice is the most intuitive channel of human communication, it isn’t suitable for navigating through, and selecting from, large amounts of visual data, for example, which is why technophiles are always drawn back to their screens.

This new interface will excite many, and may have a range of extraordinary and promising applications. But doubtless it will alarm many others as the rise of AI forces us to grapple with concepts such as privacy, liability, and responsibility.

And let’s hope, too, that this technology doesn’t always translate what’s on human beings’ minds into real-world action or spoken words, as the world could become a bizarre place indeed.

In the meantime, transhumanists will see this as yet another example of the gradual integration of technology with biology – and with good reason. But whether these innovations will encourage us to become more human, and less focused on our devices, is a different matter; arguably, such devices may train human beings to think and behave in more machine-like ways to avoid disorderly thinking.

Meanwhile, thoughts that can be hacked? Don’t bet against it.

Read more: AI regulation & ethics: How to build more human-focused AI

Read more: Fetch launches world’s first autonomous AI smart ledger

MIT’s CSAIL lab studies aquatic life with robot fish

Researchers at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) have developed a solution to a problem faced by marine biologists around the world.

Getting a closer look at ocean life can be a challenge. Conventional methods require boats, divers, and camera rigs. Together, these tend to disturb both sea creatures and their sensitive habitats, such as coral reefs.

The observer effect also applies: the creatures’ behaviour changes simply because they are being watched.

The solution is obvious: blend in. That’s why MIT has developed SoFi, a robot fish that moves just like the real thing.

Read more: Robot swans to measure water quality in Singapore

SoFi is made of silicone rubber. It has an undulating tail and can control its own buoyancy, swim in a straight line, turn, and dive up or down – all controlled via a waterproof Super Nintendo controller.

“To our knowledge, this is the first robotic fish that can swim untethered in three dimensions for extended periods of time,” writes CSAIL PhD candidate Robert Katzschmann, lead author of a new article about the project published in Science Robotics.

“We are excited about the possibility of being able to use a system like this to get closer to marine life than humans can get on their own.”

Exploring coral reefs without disturbing them

Swimming untethered has been a challenge for robots until now. In part, this is because using standard radio frequencies to communicate underwater is practically impossible. Instead, the SoFi system uses acoustic signals that allow divers to take control using a modified Nintendo remote from up to 70 feet away.
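
The article doesn’t detail the acoustic protocol, but the underlying idea – sending commands as sound rather than radio – can be illustrated with a toy frequency-shift-keying scheme, in which each bit of a command becomes a short tone at one of two frequencies. Every parameter below (sample rate, frequencies, bit duration) is an illustrative assumption, not CSAIL’s actual implementation.

```python
import numpy as np

RATE = 48_000            # samples per second (illustrative)
BIT_DUR = 0.05           # seconds per bit
F0, F1 = 9_000, 12_000   # tone frequencies for bits 0 and 1; both fit a
                         # whole number of cycles per bit window

def encode(bits: list) -> np.ndarray:
    """Turn a bit sequence into a train of tones."""
    t = np.arange(int(RATE * BIT_DUR)) / RATE
    return np.concatenate([np.sin(2 * np.pi * (F1 if b else F0) * t) for b in bits])

def decode(signal: np.ndarray) -> list:
    """Recover bits by comparing each window's energy at the two tone frequencies."""
    n = int(RATE * BIT_DUR)
    t = np.arange(n) / RATE
    ref0, ref1 = np.sin(2 * np.pi * F0 * t), np.sin(2 * np.pi * F1 * t)
    return [int(abs(signal[i:i + n] @ ref1) > abs(signal[i:i + n] @ ref0))
            for i in range(0, len(signal) - n + 1, n)]

assert decode(encode([1, 0, 1, 1, 0])) == [1, 0, 1, 1, 0]
```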

SoFi has had successful test dives at Fiji’s Rainbow Reef, where the robot managed depths of more than 50 feet for 40 minutes at a time. The robot fish was able to record high-res photos and videos using – appropriately enough – a fisheye lens.

“The authors show a number of technical achievements in fabrication, powering, and water resistance that allow the robot to move underwater without a tether,” says Cecilia Laschi, a professor of biorobotics at the Sant’Anna School of Advanced Studies in Pisa, Italy.

“A robot like this can help explore the reef more closely than current robots, both because it can get closer more safely for the reef, and because it can be better accepted by the marine species.”

Read more: CSAIL team pairs robots with VR for smart manufacturing

Looking ahead

Katzschmann has said that plans are already in the pipeline to improve SoFi. For example, the team wants to increase the fish’s speed by upgrading its pump system and streamlining the overall design.

They also want to add tracking algorithms to allow SoFi to follow real fish automatically using its onboard camera.

“We view SoFi as a first step toward developing almost an underwater observatory of sorts,” says CSAIL director Daniela Rus. “It has the potential to be a new type of tool for ocean exploration and to open up new avenues for uncovering the mysteries of marine life.”

Internet of Business says

With the media’s coverage of robotics tending to focus on humanoid, industrial, transport, or aerial drone applications, marine robots are often overlooked, but in fact are a major area of development worldwide. For example, robots that move on or below the ocean waves play an important role in environmental, climate, or disaster monitoring, and have applications in offshore installation maintenance too.

MIT’s robot carpenters will saw wood for you, but you have to make the furniture yourself

Researchers from MIT have created a new system of robot-assisted carpentry that they say could make the creation of custom furniture and fittings safer, easier, and cheaper.

The system is made up of two parts: design software and semi-autonomous robots. Users select a template from the software (like a chair, table, or shed) and then adjust it to their liking, tweaking the size and shape. This order is then turned into instructions for the robots, which autonomously pick up and saw the necessary materials to the correct size. And it’s then up to the user to put the finished product together.
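
The article doesn’t show the design software itself, but the template-to-instructions step is easy to sketch: a parametric template expands a handful of user-chosen dimensions into the cut list the robots work from. Here is a hypothetical Python illustration – the template, part names, and dimensions are ours, not MIT’s.

```python
from dataclasses import dataclass

@dataclass
class StoolTemplate:
    """Toy parametric template: the user tweaks a few dimensions and the
    template expands into a cut list for the saw robots."""
    seat_width_cm: float = 40.0
    seat_depth_cm: float = 30.0
    height_cm: float = 45.0

    def cut_list(self) -> list:
        """Return (part name, length in cm) for every piece to be cut."""
        return ([("seat plank", self.seat_width_cm)] * 3    # planks span the width
                + [("leg", self.height_cm)] * 4
                + [("cross brace", self.seat_depth_cm)] * 2)

# The user stretches the template; the cut list follows automatically.
for part, length in StoolTemplate(seat_width_cm=50).cut_list():
    print(f"cut {part}: {length} cm")
```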

At the moment, the whole process is pretty basic, and involves a lot of human oversight and instruction. There are only four design templates to…

MIT’s robotic carpenters take the hassle out of custom furniture

If you want to build custom furniture, you usually need to know your way around a saw and devote days to both designing it and cutting every last piece. MIT's CSAIL might have a better solution: let computers and robots do the hard work. Its research…

MIT’s Veil service will make private browsing more private

After reports and studies revealed that browsers' private modes aren't that secure, MIT graduate student Frank Wang decided to take things into his own hands. He and his team from MIT CSAIL and Harvard have created a tool called Veil, which you could…

MIT’s new chip could bring neural nets to battery-powered gadgets

MIT researchers have developed a chip designed to speed up the hard work of running neural networks, while also dramatically reducing the power consumed in doing so – by up to 95 percent, in fact. The basic concept involves simplifying the chip design so that the shuttling of data between different processors on the same chip is taken out of the equation. The big advantage of this new…

MIT’s low power encryption chip could make IoT devices more secure

The Internet of Things hasn't ever been super secure. Hacked smart devices have been blamed for web blackouts, broken internet, spam and phishing attempts and, of course, the coming smart-thing apocalypse. One of the reasons that we haven't seen the…

MIT’s NanoMap vision helps drones to see complexity at speed

Researchers at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) have developed a sophisticated computer vision system for flying robots.

NanoMap allows drones to navigate through dense environments at 20 miles per hour.

Drones’ abilities are taking off

Today’s commercial drones far exceed the capabilities of their predecessors. But if they are to take on more complex or commonplace roles in the workplace, they need to get much smarter and safer.

The vast majority of drones deployed in construction, media, or agriculture applications have some form of computer vision. At the very least they can sense obstacles directly in front of them and avoid collisions.

Some, like DJI’s latest model and those enhanced with Intel’s RealSense technology, can detect obstacles in multiple directions and plot a path around them.

However, CSAIL’s NanoMap system aims to take that awareness to the next level.

As outlined in a new research paper, NanoMap integrates sensing more deeply with control. It works from the starting point that any drone’s position in the real world is uncertain over time.

The new system allows a drone to model and account for that uncertainty when planning its movements – as this video reveals.

Navigating around warehouses to check stock levels or move items from one place to another is just one example of the kind of dynamic environments where drones will need to operate safely.

This ability will be vital in helping drones’ commercial applications to spread.

Read more: CSAIL team pairs robots with VR for smart manufacturing

SLAM dunk scenarios

Developing drones that can build a picture of the world around them and react to shifting environments is a challenge. This is particularly true when computational power tends to be proportional to weight.

Simultaneous localisation and mapping (SLAM) technology is a common way for drones to build a detailed picture of their location from raw data. However, this technique is unreliable at high speed, which makes it unsuitable for tight spaces, or environments where objects are being moved, or the layout is dynamic.

“Overly confident maps won’t help you if you want drones that can operate at higher speeds around humans,” said graduate student Pete Florence, lead author on a related paper.

“An approach that is better aware of uncertainty gets us a much higher level of reliability in terms of being able to fly in close quarters and avoid obstacles.”

Read more: Pyeongchang Winter Olympics to be defended by drone-catching drones

NanoMap works with uncertainty

Using NanoMap, a drone can build a picture of its surroundings by stitching together a series of measurements via depth-sensing. Not only can the drone plan for what it sees already, but it can also plan how to move around areas that it can’t see yet, based on what it has seen already.

“It’s like saving all of the images you’ve seen of the world as a big tape in your head,” explains Florence. “For the drone to plan its motions, it essentially goes back into time to think individually of all the different places that it was in.”

NanoMap operates under an assumption that humans are familiar with: if you know roughly where something is and how large it is, you don’t need much more detail if your only aim is to avoid crashing into it.
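
The system’s actual machinery is more sophisticated, but that intuition can be captured in a few lines: inflate each remembered obstacle by the drone’s accumulated position uncertainty before checking whether a planned path clears it. A simplified sketch follows, with illustrative numbers rather than anything taken from the paper.

```python
import numpy as np

def is_safe(path_points: np.ndarray, obstacle: np.ndarray,
            obstacle_radius: float, sigma: float, k: float = 3.0) -> bool:
    """Uncertainty-aware collision check: the obstacle's radius is padded
    by k standard deviations of the drone's position error, so older,
    more uncertain observations demand a wider berth."""
    inflated = obstacle_radius + k * sigma
    return bool((np.linalg.norm(path_points - obstacle, axis=1) > inflated).all())

# The longer ago an obstacle was seen, the larger sigma has grown, and
# the more conservatively the planner must route around it.
path = np.array([[0.0, 0.0], [1.0, 0.5], [2.0, 1.0]])
print(is_safe(path, obstacle=np.array([1.0, 1.2]),
              obstacle_radius=0.3, sigma=0.1))  # True: path keeps clear
```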

By accounting for uncertainty in its measurements, the NanoMap system has reduced the team’s crash rate to just two percent.

“The key difference to previous work is that the researchers created a map consisting of a set of images with their position uncertain, rather than just a set of images with their positions and orientation,” says Sebastian Scherer, a systems scientist at Carnegie Mellon University’s Robotics Institute.

“Keeping track of this uncertainty has the advantage of allowing the use of previous images, even if the robot doesn’t know exactly where it is. This allows for improved planning.”

Internet of Business says

As drones spread into more and more vertical applications, such as farming, manufacturing, critical infrastructure maintenance, building, environmental monitoring, security, law enforcement, broadcasting, autonomous cargo, deliveries, and even public transport, their safety around human beings, and in complex environments, becomes ever more important to demonstrate.

Light-touch regulation is a good idea, but public safety must remain paramount.

Over time, the regulatory environment will relax to accommodate drones as safety improves. But until then, it will remain cautious and conservative – except in remote areas, such as over the sea at offshore wind farms or oil rigs.

MIT should be congratulated for this latest innovation in drone safety, but progress remains incremental.

The core lesson is this: a two percent crash rate is impressive, but it’s still unacceptable. In enterprise software or cloud services, no one would accept 98 percent reliability; so it’s certainly not acceptable with industrial machinery in public spaces.

Battery operated, rotary wing, autonomous vehicles have multiple points of failure. In smart cities, factories, or other public spaces, a single catastrophic incident could set back the industry for years. It is incumbent on all of us to ensure that no one is harmed.

MIT’s ColorFab can 3D print jewelry that changes colors

3D printing can already turn your amazing ideas into tangible objects, but a new technique out of MIT CSAIL could lead to even better results. The method, called ColorFab, gives you the ability to create objects that can change colors after you print…