Snapchat’s iPhone X-exclusive Lenses look more realistic than usual


Snapchat has discovered a way to leverage the power of iPhone X's TrueDepth camera — and that means you'll have access to exclusive Lenses if you use Apple's all-screen mobile device. Starting today, you'll see TrueDepth-enabled Lenses appear period…
Engadget RSS Feed

This Game Developer Uses iPhone X to Create Realistic Facial Animations


The iPhone X’s advanced TrueDepth Camera may have wider applications than just Face ID and Animoji — the flagship is now apparently being used in game development. Finnish game studio Next Games is reportedly using the iPhone X for a new location-based augmented reality title called The Walking Dead: Our World. Specifically, the studio has […]
iDrop News

Gunpowder Moon is a chillingly realistic book about the fight to control the Solar System


On September 12th, 1962, President John F. Kennedy spoke at Rice University in Houston, Texas, about the need for America to land on the Moon “before the decade is out.” The space race was never primarily about exploration or science, but rather capturing the highest ground there was in the chilly nuclear conflict with the Soviet Union. In his new novel, Gunpowder Moon, David Pedreira envisions an inhabited Moon and its role in a larger geopolitical fight for planetary dominance.

The Moon has long captured the attention of science fiction writers, either imagining its inhabitants in the midst of a political experiment — as in Robert Heinlein’s The Moon is a Harsh Mistress or John Kessel’s fantastic The Moon and the Other — or as a sort…


The Verge – All Posts


Mattel’s ‘Jurassic World’ dino-bots are surprisingly realistic

Mattel's last Kamigami STEM robot was an adorable DIY ladybug. Now, the toy company is aiming for something bigger with its new Jurassic World bots. You'll still have to put them together first, but what you end up with is a complex robo-dino with r…
Engadget RSS Feed

Elon Musk Wants to Meld the Human Brain With Computers. Here’s a Realistic Timeline.

Just as ancient Greeks fantasized about soaring flight, today’s imaginations dream of melding minds and machines as a remedy to the pesky problem of human mortality. Can the mind connect directly with artificial intelligence, robots and other minds through brain-computer interface (BCI) technologies to transcend our human limitations?

Over the last 50 years, researchers at university labs and companies around the world have made impressive progress toward achieving such a vision. Recently, successful entrepreneurs such as Elon Musk (Neuralink) and Bryan Johnson (Kernel) have announced new startups that seek to enhance human capabilities through brain-computer interfacing.

How close are we really to successfully connecting our brains to our technologies? And what might the implications be when our minds are plugged in?

How do brain-computer interfaces work and what can they do?

Origins: Rehabilitation and restoration

Eb Fetz, a researcher here at the Center for Sensorimotor Neural Engineering (CSNE), is one of the earliest pioneers to connect machines to minds. In 1969, before there were even personal computers, he showed that monkeys could amplify their brain signals to control a needle moving on a dial.

Much of the recent work on BCIs aims to improve the quality of life of people who are paralyzed or have severe motor disabilities. You may have seen some recent accomplishments in the news: University of Pittsburgh researchers use signals recorded inside the brain to control a robotic arm. Stanford researchers can extract the movement intentions of paralyzed patients from their brain signals, allowing them to use a tablet wirelessly.
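At their core, many of these motor BCIs decode movement intention as a regression from neural activity to kinematics. The toy sketch below is illustrative only, using simulated firing rates and an ordinary least-squares fit; it is not the actual pipeline used by the Pittsburgh or Stanford groups, whose decoders (Kalman filters and beyond) are far more sophisticated.

```python
import numpy as np

# Toy illustration: decode 2-D cursor velocity from the simulated firing
# rates of 20 neurons. All quantities here are synthetic.
rng = np.random.default_rng(0)

n_samples, n_neurons = 500, 20
true_weights = rng.normal(size=(n_neurons, 2))   # each neuron's (vx, vy) tuning
velocity = rng.normal(size=(n_samples, 2))       # intended cursor movement
rates = velocity @ true_weights.T + 0.1 * rng.normal(size=(n_samples, n_neurons))

# Fit the decoder: least-squares solution of rates @ W ≈ velocity
W, *_ = np.linalg.lstsq(rates, velocity, rcond=None)
decoded = rates @ W

# With a clean linear relationship, the decoder recovers intent up to noise
err = np.mean((decoded - velocity) ** 2)
print(f"mean squared decoding error: {err:.4f}")
```

In a real system the same idea runs in a closed loop: spiking activity is binned in real time, pushed through the fitted decoder, and the output drives the robotic arm or cursor, with the user's brain adapting to the decoder as much as the decoder adapts to the brain.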

Similarly, some limited virtual sensations can be sent back to the brain, by delivering electrical current inside the brain or to the brain surface.

What about our main senses of sight and sound? Very early versions of bionic eyes for people with severe vision impairment have been deployed commercially, and improved versions are undergoing human trials right now. Cochlear implants, on the other hand, have become one of the most successful and most prevalent bionic implants — over 300,000 people around the world use them to hear.

A bidirectional brain-computer interface (BBCI) can both record signals from the brain and send information back to the brain through stimulation. Image Credit: Center for Sensorimotor Neural Engineering (CSNE), CC BY-ND

The most sophisticated BCIs are “bi-directional” BCIs (BBCIs), which can both record from and stimulate the nervous system. At our center, we’re exploring BBCIs as a radical new rehabilitation tool for stroke and spinal cord injury. We’ve shown that a BBCI can be used to strengthen connections between two brain regions or between the brain and the spinal cord, and reroute information around an area of injury to reanimate a paralyzed limb.

With all these successes to date, you might think a brain-computer interface is poised to be the next must-have consumer gadget.

Still early days

Not all BCIs, however, are invasive. Noninvasive BCIs that don’t require surgery do exist; they are typically based on electrical (EEG) recordings from the scalp and have been used to demonstrate control of cursors, wheelchairs, robotic arms, drones, humanoid robots and even brain-to-brain communication.

But a careful look at some of the current BCI demonstrations reveals we still have a way to go: When BCIs produce movements, they are much slower, less precise and less complex than what able-bodied people do easily every day with their limbs. Bionic eyes offer very low-resolution vision; cochlear implants can electronically carry limited speech information, but distort the experience of music. And to make all these technologies work, electrodes have to be surgically implanted — a prospect most people today wouldn’t consider.

The first demonstration of a noninvasive brain-controlled humanoid robot “avatar” named Morpheus in the Neural Systems Laboratory at the University of Washington in 2006. This noninvasive BCI infers what object the robot should pick and where to bring it based on the brain’s reflexive response when an image of the desired object or location is flashed.
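The “reflexive response” described here is an event-related potential; the P300, a small voltage deflection roughly 300 ms after a meaningful stimulus, is the classic example. A toy sketch of the standard epoch-averaging trick, using entirely simulated data rather than anything from the actual Morpheus system:

```python
import numpy as np

# Toy sketch of P300-style target selection. One simulated EEG channel;
# only the attended "target" stimulus evokes a small positive deflection
# ~300 ms after its flash. All numbers are illustrative.
rng = np.random.default_rng(42)
fs = 250                       # sample rate (Hz)
epoch = int(0.6 * fs)          # 600 ms epochs, time-locked to each flash
n_stimuli, n_flashes = 4, 30   # 4 candidate objects, 30 flashes each
target = 2                     # ground-truth attended object

t = np.arange(epoch) / fs
p300 = 2.0 * np.exp(-((t - 0.3) ** 2) / (2 * 0.05 ** 2))  # bump at 300 ms

epochs = rng.normal(scale=1.0, size=(n_stimuli, n_flashes, epoch))
epochs[target] += p300         # only the target evokes the response

# Average over flashes: noise shrinks by sqrt(n_flashes), the ERP survives
erps = epochs.mean(axis=1)                    # shape (n_stimuli, epoch)
window = (t > 0.25) & (t < 0.35)              # look near 300 ms post-flash
scores = erps[:, window].mean(axis=1)
decoded = int(np.argmax(scores))
print(f"decoded target: {decoded}")
```

Averaging is what makes this practical with noisy scalp recordings: a response invisible in any single trial emerges cleanly once a few dozen time-locked epochs are summed.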

But all these demos have been in the laboratory — where the rooms are quiet, the test subjects aren’t distracted, the technical setup is long and methodical, and experiments last only long enough to show that a concept is possible. It’s proved very difficult to make these systems fast and robust enough to be of practical use in the real world.

Even with implanted electrodes, another problem with trying to read minds arises from how our brains are structured. We know that each neuron and its thousands of connected neighbors form an unimaginably large and ever-changing network. What might this mean for neuroengineers?

An electrocorticography grid, used for detecting electrical changes on the surface of the brain, is being tested for electrical characteristics. Image Credit: Center for Sensorimotor Neural Engineering, CC BY-ND

Imagine you’re trying to understand a conversation between a big group of friends about a complicated subject, but you’re allowed to listen to only a single person. You might be able to figure out the very rough topic of what the conversation is about, but definitely not all the details and nuances of the entire discussion. Because even our best implants only allow us to listen to a few small patches of the brain at a time, we can do some impressive things, but we’re nowhere near understanding the full conversation.

There is also what we think of as a language barrier. Neurons communicate with each other through a complex interaction of electrical signals and chemical reactions. This native electro-chemical language can be interpreted with electrical circuits, but it’s not easy. Similarly, when we speak back to the brain using electrical stimulation, it is with a heavy electrical “accent.” This makes it difficult for neurons to understand what the stimulation is trying to convey in the midst of all the other ongoing neural activity.

Finally, there is the problem of damage. Brain tissue is soft and flexible, while most of our electrically conductive materials — the wires that connect to brain tissue — tend to be very rigid. This means that implanted electronics often cause scarring and immune reactions that mean the implants lose effectiveness over time. Flexible biocompatible fibers and arrays may eventually help in this regard.

Co-adapting, cohabiting

Despite all these challenges, we’re optimistic about our bionic future. BCIs don’t have to be perfect. The brain is amazingly adaptive and capable of learning to use BCIs in a manner similar to how we learn new skills like driving a car or using a touchscreen interface. Similarly, the brain can learn to interpret new types of sensory information even when it’s delivered noninvasively using, for example, magnetic pulses.

Learning to interpret and use artificial sensory information delivered via noninvasive brain stimulation.

Ultimately, we believe a “co-adaptive” bidirectional BCI, where the electronics learns with the brain and talks back to the brain constantly during the process of learning, may prove to be a necessary step to build the neural bridge. Building such co-adaptive bidirectional BCIs is the goal of our center.

We are similarly excited about recent successes in targeted treatment of diseases like diabetes using “electroceuticals” — experimental small implants that treat a disease without drugs by communicating commands directly to internal organs.

And researchers have discovered new ways of overcoming the electrical-to-biochemical language barrier. Injectable “neural lace,” for example, may prove to be a promising way to gradually allow neurons to grow alongside implanted electrodes rather than rejecting them. Flexible nanowire-based probes, flexible neuron scaffolds and glassy carbon interfaces may also allow biological and technological computers to happily coexist in our bodies in the future.

From assistive to augmentative

Elon Musk’s new startup Neuralink has the stated ultimate goal of enhancing humans with BCIs to give our brains a leg up in the ongoing arms race between human and artificial intelligence. He hopes that with the ability to connect to our technologies, the human brain could enhance its own capabilities — possibly allowing us to avoid a potential dystopian future where AI has far surpassed natural human capabilities. Such a vision certainly may seem far-off or fanciful, but we shouldn’t dismiss an idea on strangeness alone. After all, self-driving cars were relegated to the realm of science fiction even a decade and a half ago — and now share our roads.

A BCI can vary along multiple dimensions: whether it interfaces with the peripheral nervous system (a nerve) or the central nervous system (the brain), whether it is invasive or noninvasive and whether it helps restore lost function or enhances capabilities. Image Credit: James Wu; adapted from Sakurambo, CC BY-SA

Connecting our brains directly to technology may ultimately be a natural progression of how humans have augmented themselves with technology over the ages, from using wheels to overcome our bipedal limitations to making notations on clay tablets and paper to augment our memories. Much like the computers, smartphones and virtual reality headsets of today, augmentative BCIs, when they finally arrive on the consumer market, will be exhilarating, frustrating, risky and, at the same time, full of promise.

In the nearer future, as brain-computer interfaces move beyond restoring function in disabled people to augmenting able-bodied individuals beyond their human capacity, we need to be acutely aware of a host of issues related to consent, privacy, identity, agency and inequality. At our center, a team of philosophers, clinicians and engineers is working actively to address these ethical, moral and social justice issues and offer neuroethical guidelines before the field progresses too far ahead.

The post Elon Musk Wants to Meld the Human Brain With Computers. Here’s a Realistic Timeline. appeared first on Futurism.


Scintillating Realistic Skiing Simulator ‘Just Ski’ Is Free for the First Time on the App Store

Jeff Weber’s Just Ski [Free] is easily one of the most interesting games I’ve played on iOS over the past year, which may sound surprising to anyone who has not given it a go yet. While its hyper-minimalist aesthetic and skiing mechanics we’ve already seen perfected in games like Alto’s Adventure [$4.99] may make Just Ski seem unremarkable at first glance, the developer’s emphasis on realistic physics created what others might call the Dark Souls of the skiing genre. Joking aside, such a lazy descriptor detracts from the fact that Just Ski’s nuanced mechanics are immensely satisfying to nail down, and success or failure rests entirely with the player. For the first time, Just Ski is now available for free for a limited time, which serves as a perfect excuse to get back onto the pistes and promptly crash to your inevitable alpine death.

I don’t want to start throwing around sweeping simplifications declaring that Just Ski is a hard game, or an inaccessible one. While it may be easier to pick up Alto’s Adventure and go on an epic run on your first try, Just Ski can be conquered with extra care over positioning and respect for its physics. It does have a bit of a learning curve, however, and the developer released a video shortly after launch that gives any budding skiers the skills they need to survive. If you adapt to its unique mechanics, Just Ski is the definition of catharsis. If you don’t, well, all you’ve lost is a few minutes of your time, considering Just Ski is free for a limited time. Before the price reverts to its usual $0.99 next week, download Just Ski for absolutely nothing on the App Store today, and let us know your thoughts on our forum thread.


Oculus tweaks VR audio to seem closer and more realistic

Audio can be just as important as visuals if you want to make true-to-life VR experiences. That's why Oculus has introduced changes to the Rift SDK that give developers the power to make spatial audio as realistic as possible. One of the features the…
Engadget RSS Feed

Virtual Pets, Zombies, and Realistic Food Shown in New ARKit Demos

Anticipation is mounting as the public launch of iOS 11 draws nearer by the day. Of course, Apple’s next-generation software for iPhone and iPad has been in the hands of beta testers for months now, allowing testers and, more importantly, developers to get acquainted with it and create wonderful things for iOS 11 users. And by all accounts, the software is poised to usher in a substantial update for iPhone (and particularly iPad) users, offering tons of new capabilities and features for compatible iOS devices.

One of those amazing advancements will be the emergence and growth of Augmented Reality (AR)-based applications within the iOS ecosystem, several of which we’ve had the pleasure of witnessing for ourselves over the last few months. Well, the official iOS 11 isn’t even out yet, and again we have another round of ARKit creations to feast our eyes on.

For those who need a quick refresher: ARKit is Apple’s latest developer kit announced back at WWDC 2017 in June, and will allow devs to create a wide range of AR-based apps for iPhones and iPads. As we’ve seen before, these apps can have a wide range of implications and will work with everything from games to utilities to shopping, navigation, and so much more.

In the first of four demo videos, we meet Kabaq, a company that wants to revolutionize the future of ordering food through its ARKit app. In the demo, Kabaq demonstrates its ‘virtual food on a plate’ mechanism, which appears to be geared toward changing the way we might order food from a restaurant. The video shows customers in a restaurant literally arranging various foods on their plate and getting a 360-degree view of what the menu offerings look like. It’s an interesting application, and could have implications for the future of casual or upscale dining.

In the second demo video, a team of developers is shown promoting their work on the upcoming ARZombi app, which will essentially offer players an ultra-realistic (yet unequivocally AR-based) experience fighting off zombies. And whether you’re into that kind of stuff or not, the game looks like heaps of fun. Could you imagine your actual living room ‘invaded’ by virtual zombies? We’ll hedge our bets and say you’d rather not, but just in case you were curious — enjoy!

The third video shows a unique virtual pet game that’s currently trending as a campaign on Kickstarter. This app is perhaps the most interesting of what we’ve found today, and if it ever gets off the ground, the concept could pave the way for a “Tamagotchi-style” category of ARKit apps. In traditional Tamagotchi-style games, players bring a virtual pet or creature into their ‘physical’ world and interact with it on various levels. The concept sounds interesting, and would undoubtedly prove a hit among kids if it ever comes to fruition.

In today’s fourth and final demo video, ARKit is seen being used in a mixed art/educational setting; the app’s developer, Fabian Rasheed, can be seen painting and demoing a sculpture using an Apple Pencil on the iPad Pro. It’s a fascinating app concept, dubbed MakerStudio, which Rasheed says is designed to let users create 3D objects and then paint them using AR components in their immediate surroundings.


iDrop News

Hypersonic aircraft are more realistic thanks to a ceramic coating

There are a few reasons why you aren't flying across the country in hypersonic aircraft, but the simplest of them is heat: when you travel at speeds over Mach 5, the ultra-high temperatures (around 3,600F to 5,400F) strip layers from metal. How do y…
Engadget RSS Feed

GE CEO Says Automation Within the Next 5 Years is Not Realistic

Stunted Revolution

Most executives in tech believe that the next five years will bring about a significant number of jobs lost to automation. As advances in robotics and artificial intelligence (AI) are being rapidly developed, the capability of machines to do work previously requiring humans is ramping up. However, not all executives subscribe to this idea of the ultra-fast progression of automation.

Jeff Immelt, the outgoing chief executive of General Electric, did not mince words regarding his feelings about the impending automation takeover. Speaking at the Viva Technology conference in Paris, Immelt said, “I think this notion that we are all going to be in a room full of robots in five years … and that everything is going to be automated, it’s just BS. It’s not the way the world is going to work.”

Immelt believes that tech executives who have never run or worked in a factory have no idea how factories actually operate, and therefore cannot realistically gauge how automation will progress.

Jeff Immelt. Image credit: Gage Skidmore/Flickr

Human/Tech Integration

Other experts, like Elon Musk and Greg Creed, CEO of Yum Brands (the company behind Pizza Hut, KFC, and Taco Bell), believe automation poses a near-term threat to many human jobs. Musk goes even further, saying that humans will need to integrate with machines in order to remain relevant in the future.

The problem with treating automation as something in the far-off future is that it cuts short the necessary conversations about how to prepare workers for job losses. One of the more popular proposed solutions is a Universal Basic Income (UBI), an idea supported by the likes of Musk, Mark Zuckerberg, and other experts.

Both sides of this issue are turning evidence into predictions, and those predictions can only be discounted or vindicated by time. Even so, the question of how we can prepare remains vital whether automation is 5 or 50 years away.

The post GE CEO Says Automation Within the Next 5 Years is Not Realistic appeared first on Futurism.