Oppo R15 and R15 Dream Mirror Edition go official with notched screens, new cameras

Oppo R15 and R15 Dream Mirror Edition are now official. Just as expected, the phones pack notched tall screens, glass bodies, and dual-camera setups. Both R15 and R15 DME have notched 6.28″ OLED screens of 1,080 x 2,280 px resolution, which means no Plus version at least for now. They also share the same design – glass back with a polished metal frame. Both Oppo R15 and R15 DME come with the latest ColorOS 5.0 based on Android 8.1. It has a new “AI” assistant that learns your usage patterns, and fast 0.8s Face Unlock. The Oppo R15 runs on the new Helio P60 chipset…

GSMArena.com – Latest articles

‘Mirror Land’ Review – I’m Starting with the ‘Mon in the Mirror

I’ve experienced a few games now from Magic Cube, and while they’ve always sounded interesting on paper, they never quite seem to come together like you would hope. Their latest, Mirror Land [$0.99], is a monster-collecting RPG built around a cute idea. The game uses your actual location and allows you to summon monsters based on that information. The farther you are from where you first started up the game, the more interesting and unusual creatures you’ll find. It’s obviously inspired by Pokemon GO, but it works a little differently in practice. The game attached to that cool idea, on the other hand, is both the best I’ve played from this developer so far and still quite flawed.

Mirror Land tries to spin some kind of story around a world parallel to our own where people battle using special creatures called Fingermons that apparently come from our real world. Or something like that. There are evil clones, the hero has amnesia and can’t remember their surely uneventful past, and there are rivals aplenty. The plot is rolled out in conversations that occur before and after completing story missions, events that become so spaced out over time that it gets hard to remember how things connect. Not that it really matters. This isn’t the sort of thing you’re going to play for its story.

No, you’re probably here to collect monsters and do battle, and I can happily tell you that there’s plenty of both things going on in this game. The only way to earn monsters is through summoning, but there are a few ways to go about that. The splashy location-based summoning lets you call a monster using either easily-obtained coins or somewhat harder-to-come-by cubes. You can only summon once in any given location, at least for a certain period of time. It seemed to refresh after around a day. Your initial location when you start the game seems to be set as the home lab in-game, and monsters you summon near that lab will almost always be disappointing and mostly of the same type. Head a little farther afield and you might find something interesting, though.

Most of your monsters will probably come from the standard summoning, however, which doesn’t involve your location at all. Instead, you choose which of the three types of summon you want to perform, pay the required amount of coins, and collect a random Fingermon whose rarity more or less corresponds with how much you spent. Basically, you can summon a pretty weak bronze character after every couple of battles, hang in there for a bit and get a more useful silver character, or wait until Lucifer dons a knitted sweater and nab a powerful gold character. Mirror Land contains no IAPs whatsoever, so the only way to fill out your collection of monsters is to grind those coins.
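To make the economics of that grind concrete, here is a minimal sketch of a coin-based, tiered summon system like the one described above. It is not code from Mirror Land; the tier names mirror the review, but every cost and drop rate below is a made-up placeholder.

```python
import random

# All costs and drop rates below are invented for illustration; the review only
# says bronze summons are cheap, silver take longer to afford, and gold summons
# demand a very long grind.
SUMMON_TIERS = {
    "bronze": {"cost": 100,    "odds": {"bronze": 0.90, "silver": 0.09, "gold": 0.01}},
    "silver": {"cost": 1_000,  "odds": {"bronze": 0.30, "silver": 0.65, "gold": 0.05}},
    "gold":   {"cost": 10_000, "odds": {"bronze": 0.00, "silver": 0.40, "gold": 0.60}},
}

def summon(tier: str, coins: int) -> tuple[str | None, int]:
    """Spend coins on one summon; returns (rarity drawn or None, coins left)."""
    info = SUMMON_TIERS[tier]
    if coins < info["cost"]:
        return None, coins  # can't afford this tier yet -- keep grinding battles
    rarities = list(info["odds"])
    weights = list(info["odds"].values())
    drawn = random.choices(rarities, weights=weights, k=1)[0]
    return drawn, coins - info["cost"]

# Example: after grinding 2,500 coins, a silver summon is affordable but a gold
# summon is not, which is roughly the trade-off the review describes.
rarity, remaining = summon("silver", 2_500)
print(rarity, remaining)
```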

And grind you will, my friends. There are a lot of things I like about Mirror Land. It’s generally well-crafted, the summoning gimmick is neat, and the battle system is actually pretty fun and somewhat original. Battles play out sort of like the Active Time Battle system in many of the Final Fantasy games. Each character has to wait for a meter to fill up before they can do anything. Once the meter is full, you tap on them to use their action. If you tap just as the meter fills up, you’ll get a bonus. You can also tap just before an enemy strikes to block some of the damage at the cost of a bit of your meter. If you tap before the meter fills, the character will not only fail to act, the meter will also slightly drain. It’s lively, somewhat strategic, and surprisingly tense at times.
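For readers who want those timing rules laid out explicitly, here is a rough Python sketch of an ATB-style meter with the tap windows the review describes. The window size, block cost, drain penalty, and fill rate are illustrative guesses, not values taken from the game.

```python
# A rough sketch of the tap-timing rules described above; all numbers are guesses.
PERFECT_WINDOW = 0.15   # seconds after the meter fills during which a tap is "just in time"
BLOCK_COST = 0.25       # portion of the meter spent to block part of an incoming hit
EARLY_TAP_DRAIN = 0.10  # meter lost for tapping before the character is ready

class Combatant:
    def __init__(self, name: str, fill_rate: float):
        self.name = name
        self.fill_rate = fill_rate   # meter gained per second of waiting
        self.meter = 0.0             # 0.0 (empty) .. 1.0 (ready to act)
        self.time_since_full = None  # None until the meter first fills

    def tick(self, dt: float) -> None:
        """Advance this character's action meter by dt seconds."""
        if self.meter >= 1.0:
            self.time_since_full = (self.time_since_full or 0.0) + dt
            return
        self.meter = min(1.0, self.meter + self.fill_rate * dt)
        if self.meter >= 1.0:
            self.time_since_full = 0.0

    def tap(self, enemy_attack_incoming: bool = False) -> str:
        """Resolve a tap on this character, mirroring the review's rules."""
        # Tapping just before an enemy strike blocks some damage at a meter cost.
        if enemy_attack_incoming and self.meter >= BLOCK_COST:
            self.meter -= BLOCK_COST
            return "block"
        # Tapping once the meter is full triggers the action, with a bonus for
        # hitting the moment it fills.
        if self.meter >= 1.0:
            perfect = self.time_since_full is not None and self.time_since_full <= PERFECT_WINDOW
            self.meter, self.time_since_full = 0.0, None
            return "act_with_bonus" if perfect else "act"
        # Tapping too early wastes the input and slightly drains the meter.
        self.meter = max(0.0, self.meter - EARLY_TAP_DRAIN)
        return "fumble"

hero = Combatant("speedy_silver", fill_rate=0.5)
hero.tick(2.0)      # two seconds of waiting fills the meter exactly
print(hero.tap())   # -> "act_with_bonus"
```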

Unfortunately, it’s also the entire game. You’ll enter battles from a map screen, where you are presented with a random assortment of missions with different conditions. In this one, you can only use bronze monsters. Over here, you’ll have to fight one-on-one. Most of the time, you’ll have a few types to pick from, though they’ll occasionally collapse into a single mission that you’ll have to either pass or fail to get a new spread. The battles earn experience for participating Fingermons, experience for your hero, and either coins or cubes. Fingermons can only level as high as the hero’s level, and your regulars will usually cap out well before your hero gains their next level. Presumably the idea is to keep players using an assortment of creatures.

The story is advanced by completing special story missions. These appear after you fulfill a certain set of requirements, like fighting a specific number of battles or summoning a particular number of Fingermons. You can then tackle the story mission and, should you win, you’ll be given the next set of requirements. Things move quickly enough in the beginning, but the further in you go, the more time-consuming it gets to satisfy those requirements and the more difficult the story missions themselves become. Some requirements ask you to summon using the location mechanic, so if you can’t go wandering around in the real world, you won’t be able to move forward. Mirror Land eventually devolves into a game where you grind battles to satisfy the requirements to make the story mission pop, then grind more battles to get to a high enough level to be able to beat that mission. Rinse and repeat.

Contributing to the problem is that the fundamentals of combat don’t change as the game unfolds. The same things that you do near the beginning are what you’ll be doing hours down the road. Watch carefully for each character’s turn to come up, try to tap them at the right moment, and hope your team is strong enough to outlast the other. It’s fine when you only need to do a few battles in a row, but when the game starts asking you to do twenty battles or try to level up to match enemies that are fifteen to twenty levels higher than you in order to proceed, it starts to feel like a genuine grind. A grind that demands your attention, mind you, so you can’t get away with doing it while you watch a movie or something.

The characters also feel like they’re not balanced very well. Faster Fingermons are definitely more useful than the rest, as any attack will drain some of a character’s meter and slow their ability to take their own turns. That goes both ways, so using slower Fingermons becomes a formula for frustration as you watch your meter slowly climb only to get knocked down with each swift strike. I picked up a speedy silver character early on who could punch well above her weight level-wise, while a higher-ranked character who was strong but slow tended to get beaten up by far weaker foes.

Besides leveling up through battling, you can also enhance your creatures by spending coins on their special abilities or cubes on ranking them up. Higher ranked characters tend to be a lot more powerful than their low-ranked cousins, though you’ll still want to keep an assortment of characters of each rank around for specific battles. Characters who are silver-ranked or higher have access to special abilities like drain strikes, attack buffs, and so on. They’ll use them automatically, but you can spend your coins to improve their effectiveness. Fingermons come in different types, with the usual system of each type having one type they’re strong against and one that they’re weak to. This doesn’t usually matter that much, but it can make a big difference in tough battles.

It’s not hard to imagine a world parallel to our own where Mirror Land was a free-to-play game with a stamina meter and premium-currency summons. I applaud the developer for not taking that particular route, but I honestly feel like the game works a lot like one of those kinds of affairs. You can run as many battles in a row as you want to here, but I wouldn’t recommend doing too many at once lest the game start to feel tiresome. You can dump your coins into a ton of weaker summons, but you’re probably best to wait until you can afford a shot at the best ones. Progress is slow and tedious, battle strategies don’t change a whole lot over time, and the story feels like a bunch of weak connecting tissue to provide some kind of context for rolling the same situations over and over again. I liked this game in the moment, but the longer I played it, the less I wanted to.

Mirror Land is better than I was expecting given this developer’s track record, and it can be quite enjoyable at times. But for a game that doesn’t try to extract any additional money from the player beyond the initial asking price, it sure feels structured like a game that does. The stop-and-start pacing and tremendous amounts of repetitive grinding required detract greatly from a game that could otherwise be pretty solid. There’s a decent game here, and it’s really only in the incidentals that it doesn’t shake out to be more than that.

TouchArcade

Nothing says Happy Valentine’s Day like a ‘Black Mirror’ dating app

So, it's Valentine's Day, and what better time to check on the potential end date of your romantic relationship? It's easy to do over at coach.dating, a fun little web app based on Coach, the dating AI that manages relationships in the Black…

Engadget RSS Feed

Mirror Image Compounds Could Help Drugs Last Longer

Changing the (Pep)Tide

Researchers from the University of Toronto have discovered a way to trick the body’s natural defenses into allowing medicines into the bloodstream without an injection.

Drugs used to treat diseases such as diabetes and osteoporosis, which currently can only be delivered through an injection because the body would disintegrate them before they move from the stomach into the blood, could soon be offered as a simple pill.

The team, which detailed their findings in the journal Proceedings of the National Academy of Sciences, created “mirror-image molecules” of existing medications.

Image Credit: Michael Garton, University of Toronto

Philip Kim, professor of computer science and molecular genetics in the University of Toronto’s Donnelly Centre for Cellular and Biomolecular Research, explained his work in a press release: “Mirror image peptides are not recognized and degraded by enzymes in the stomach or bloodstream and therefore have a long-lasting effect.”

Longer Lasting

Peptides are molecules made of two or more amino acids. Naturally, a chain of amino acids is arranged in what is called the left-handed, or “L”, configuration. The body’s enzymes are built to break down chains of these naturally occurring amino acids, even when the compounds are beneficial to us. To allow drugs to slip past the body’s defense, the researchers flipped this configuration, creating molecules arranged in the opposite way. This right-handed shape is known as dextrorotary, or “D”.

The scientists then matched the molecules with a series of drug compounds and tested their ability to do the work of the original drugs. When tested in the lab, mirror image drugs worked similarly to their natural counterparts, but also had longer lasting effects. The researchers are now looking into whether the medications would have the same effect when orally administered to patients. According to Kim, “for frequently dosed medication, this is of great interest, as taking a pill is much easier than having an injection. This could lead to many more peptide drugs being taken as pills.”

Kim and his team will now take the discovery to its next step by first patenting the tech and then seeking to work with the pharmaceutical industry to monetize it. Their work will also expand to other peptides including some that could help in the fight against the Dengue and Zika viruses.

The post Mirror Image Compounds Could Help Drugs Last Longer appeared first on Futurism.

Futurism

Amazon’s Electric Dreams is more optimistic about the future than Black Mirror

These days, it’s almost impossible to talk about any kind of science-fiction TV anthology without comparing it to Charlie Brooker’s future-fears series Black Mirror. It’s the question most SF fans and telephiles will immediately ask. The new Amazon Prime Video anthology Philip K. Dick’s Electric Dreams does have some comparison points to Brooker’s series, and it’s unlikely that either Amazon or its UK television partner, Channel 4, mind having their fledgling series mentioned alongside Netflix’s well-established, buzzy technological creepshow. But Electric Dreams is decidedly brighter than Black Mirror. Co-creators Ronald D. Moore and Michael Dinner are every bit as pessimistic as Brooker about how technology is going to transform the…

Continue reading…

The Verge – All Posts

Kohler wants you to talk to your toilet and mirror which is not at all weird

Kohler, a maker of kitchen and bath products, seems to believe that you should never have to touch their products if at all possible. They’re coming to CES with a new voice and motion-control platform so that you can get Alexa in your bathroom mirror, operate your kitchen faucet with gestures, and talk to your toilet. Not kidding!

Kohler Konnect (that’s right, connect with a “K”) will connect the company’s hardware to Amazon Alexa, Google Assistant, and Apple HomeKit, enabling consumers to interact with Kohler’s products through voice commands, gestures, and presets.

Read More

Kohler wants you to talk to your toilet and mirror which is not at all weird was written by the awesome team at Android Police.

Android Police – Android News, Apps, Games, Phones, Tablets

In season 11, The X-Files is slowly moving closer to Black Mirror

Technology-based horror is nothing new for The X-Files, which had FBI agents Fox Mulder and Dana Scully confront their first murderous AI back in the 1993 episode “Ghost in the Machine.” But technological threats have always had the same status as other monster-of-the-week problems on the show, which used to give futuristic fears the same weight as the series’ multiple episodes about killer fungus.

That’s changed since The X-Files returned for its 10th season in 2015. In that season’s premiere episode, Mulder (David Duchovny) learned that the alien invasion he feared for decades isn’t happening, and may never have actually been planned. The revelation helped cut ties with the series’ original nine-year run, and the extremely convoluted…

Continue reading…

The Verge – All Posts

Amazon patents a mirror that dresses you in virtual clothes

Have you ever fretted over buying a suit or dress online for a wedding or another flashy event, wondering how it would look on your frame or if it would even fit? That might not be a problem for much longer, now that Amazon has patented a blended-reality mirror that lets you try on clothes virtually while placing you into a virtual location (via GeekWire).

The patent describes the mirror as partially reflective and partially transmissive, using a mix of displays, cameras, and projectors to create the blended image. The imagined mirror works by scanning the environment to generate a virtual model, then identifying the face and eyes of the user to determine which objects should be seen as a reflection. Once this process is completed, the virtual…

Continue reading…

The Verge – All Posts

Dark Future: Here’s When We’ll Have the Autonomous Guard Dogs from Black Mirror

This article is part of a series about season four of Black Mirror, in which Futurism considers the technology pivotal to each episode and evaluates how close we are to having it. Please note that this article contains mild spoilers. Season four of Black Mirror is now available on Netflix.

The Headless Guard Dog

Three people prepare for their mission to break into a seemingly abandoned warehouse. They’ve made a promise to help someone who is dying, to make his final days easier. They seem nervous and a little frantic, as if they’re undertaking this task out of sheer desperation.

Within a few minutes, we find out what they’re afraid of — and as the episode continues, we understand why the characters were so worried. It’s a four-legged, solar-powered robot-dog. It looks eerily similar to the latest iteration of Boston Dynamics’ SpotMini. Like the SpotMini, “the dog,” as it’s called in the “Metalhead” episode of the latest season of Black Mirror, doesn’t have a head. Instead, it has a front piece encased in glass that houses its many sensors, including a sophisticated computer vision system (we see this as the screen flips periodically to the dog’s view of the world).

Unlike the SpotMini, however, the metalhead dog comes with a whole bunch of advanced weaponry — a grenade that launches shrapnel-like tracking devices into the flesh of prospective thieves or assailants, for example. And in its front legs, the dog is armed with guns powerful enough to pop a person’s head off. It can also connect to computer systems, which allows it to conduct more high-tech tasks like unlocking security doors and driving a smart vehicle.

The metalhead dog is no regular guard dog. It’s lethal and relentless, able to hunt down and destroy anyone who crosses it. Potential robbers, like the characters at the beginning of the episode, would be wise to stay away, no matter how promising the payoff of a break-in.

Like a lot of the technology in Black Mirror, the dog isn’t so far-fetched. Countries like the United States and Russia are keen on developing weapons powered by artificial intelligence (AI); companies like Boston Dynamics are actively developing robo-dogs to suit those needs, among others.

But how close are we to having an AI-enhanced security dog like the one in “Metalhead”?

Beware of (robo)Dog

According to experts, some of the features in Black Mirror’s robotic dog are alarmingly close to reality. In November, a video about futuristic “slaughterbots” — autonomous drones that are designed to search out specific human targets and kill them — went viral. The comments section reflects people’s discomfort with a future filled with increasingly facile ways to kill people.

Mercifully, the technology shown in the video was fictional. But that may not be the case for long, Stuart Russell, a professor of computer science at the University of California, Berkeley, who was part of the team that worked on the video, tells Futurism. “The basic technologies are all in place. It’s not a harder task than autonomous driving; so it’s mainly a matter of investment and effort,” Russell said. “With a crash project and unlimited resources [like the Manhattan Project had], something like the slaughterbots could be fielded in less than two years.”

Metalhead. Image credit: Netflix

Louis Rosenberg, the CEO and founder of Unanimous AI, a company that creates AI algorithms that can “think together” in a swarm, agrees with Russell’s assertion that fully autonomous robotic security drones could soon be a regular part of our lives. “It’s very close,” Rosenberg told Futurism. “[T]wenty years ago I estimated that fully autonomous robotic security drones would happen by 2048. Today, I have to say it will happen much sooner.” He expects that autonomous weapons like these could be mass produced between 2020 and 2025.

But while the “search and destroy” AI features may be alarmingly close, Black Mirror‘s metalhead dog is still some ways off, Russell noted.

The problem with creating this robo-killer, it seems, goes back to the dog’s ability to move seamlessly through a number of different environments. “The dog functions successfully for extended periods in the physical world, which includes a lot of unexpected events. Current software is easily confused and then gets ‘stuck’ because it has no idea what’s going on,” Russell said.

It’s not just software problems that stand in the way. “Robots with arms and legs still have some difficulties with dextrous manipulation of unfamiliar objects,” Russell said. The dog, in contrast, is able to wield a kitchen knife with some finesse.

And the dog is not so easy to outsmart, unlike today’s robots.  “Robots are still easily fooled, of course — they currently would be unable to cope with previously unknown countermeasures, say, a tripwire that is too thin for the [LIDAR] to detect properly, or some jamming device that messes up navigation using false signals,” Russell said.

Please Curb Your (robo)Dog

In the end, the consensus seems to be that, in the future, we could bring such robo-dogs to life. But should we?

Both Rosenberg and Russell agree that the weaponization of AI, particularly as security or “killer-robots,” will bring the world more harm than good. “I sincerely hope it never happens. I believe autonomous weapons are inherently dangerous — [they leave] complex moral decisions to algorithms devoid of human judgement,” Rosenberg explained. The autocorrect algorithms on most smartphones make errors often enough, he continued, and an autonomous weapon would probably still make errors. “I believe we are [a] long way from making such a technology foolproof,” Rosenberg said.

Granted, most AIs today are pretty sophisticated. But this doesn’t mean they are ready to make life-or-death decisions.

One big hurdle: the central inner workings of most algorithms are incomprehensible to us. “Right now, a big problem with deep learning is the ‘black box’ aspect of the technology, which prevents us from really understanding why these types of algorithms take certain decisions,” Pierre Barreau, the CEO of Aiva Technologies, which created an artificial intelligence that composes music, told Futurism via email. “Thus, there is a safety problem when applying these technologies to take sensitive decisions in the field of security because we may not know exactly how the AI will react to every type of situation, and if its intentions will be the same as ours.”

That seeming arbitrariness with which AIs make enormous, important decisions concerns critics of autonomous weapons, such as Amnesty International. “We believe that fully autonomous weapons systems would not be able to comply with international human rights law and international policing standards,” Rasha Abdul-Rahim, an arms control advisor for Amnesty, told Futurism via email.

Humans aren’t perfect at making these decisions, either, but at least we can show our mental work and understand how someone reached a particular decision. That’s not the case if, say, a robo-cop is deciding whether or not to use a taser on someone. “If used for policing, we’d have to agree that machines can decide on the application of force against humans,” Russell said. “I suspect there will be a lot of resistance to this.”

In the future, global governing bodies might prohibit or discourage the use of autonomous robotic weapons — at least, according to Unanimous AI’s swarm, which has successfully predicted a number of decisions in the past.

However, others claim that there may be situations in which countries might be justified in using autonomous weapons, so long as they are heavily regulated and the technology does as it’s intended. Russell pointed out that a number of global leaders, including Henry Kissinger, propose banning autonomous weapons designed to directly attack people while still allowing their use in aerial combat and submarine warfare.

“Therefore there must always be effective and meaningful human control over what the [International Committee of the Red Cross] has termed as their ‘critical functions’ — meaning the identification of targets and the deployment of force,” Abdul-Rahim said.

Then again, the kind of nuance some experts suggest — autonomous weapons are fine in one case, but not allowed in others — might be difficult to implement, and some assert that such plans might not be enough. “Amnesty International has consistently called for a preemptive ban on the development, production, transfer, and use of fully autonomous weapons systems,” Abdul-Rahim said. But it might already be too late for a preemptive ban, since some countries are already progressing in their development of AI weapons.

Still, Russell and numerous other experts have been campaigning to halt the development and use of AI weapons; a group of 116 global leaders recently sent an open letter to the United Nations on the subject. The U.N. is supposedly already considering a ban. “Let’s hope legal restrictions block this reality from happening anytime soon,” Rosenberg concluded.

There’s little question that AI is poised to revolutionize much of our world, including how we fight wars and other international conflicts. It will be up to international lawmakers and leaders to determine if developments like autonomous weapons, or faceless robotic guard dogs, would cause more harm than good.

The post Dark Future: Here’s When We’ll Have the Autonomous Guard Dogs from Black Mirror appeared first on Futurism.

Futurism

Dark Future: Here’s When We’ll Have the Black Mirror Tech That Lets us Share Physical Sensations

This article is part of a series about season four of Black Mirror, in which Futurism considers the technology pivotal to each episode and evaluates how close we are to having it. Please note that this article contains mild spoilers. Season four of Black Mirror is now available on Netflix.

A Twisted Museum

Miles and miles of desert and open highway  — and then, a small roadside museum, one of those things people pull over and visit to stretch their legs but rarely seek out on purpose. It looks uninhabited; its windows are barred with rusty metal, making it a dark blemish on the peaceful, dusty continuity of the desert.

Sometimes, things look exactly as they should…because what’s inside Rolo Haynes’ Black Museum is just as twisted and dark as its exterior suggests.

Image Credit: Netflix

“There’s a sad, sick story behind almost everything in here,” whispers Rolo Haynes, owner and proprietor, to the museum’s sole visitor. Haynes has collected criminological artifacts, each of which tells its own story of hope, pain, and horror. But unlike a collection of medieval torture instruments in the basement of a history museum (and in true Black Mirror fashion) each artifact was once a gleaming specimen of cutting-edge neurotechnology.

Rolo describes each artifact to the visitor in flashbacks. The first is a web of glowing diodes, draped over a mannequin head — the first piece of Rolo’s collection that foreshadows how “the main attraction came to be” (you’ll have to watch the show to see what that is exactly).

One of the most disturbing sequences hinges on headgear-like tech. In its former life, we learn, the cap-like device would gather information about the physical sensations of its wearer non-invasively. That information would then be sent wirelessly from the transceiver to a neural implant installed in the base of a doctor’s skull, right behind the left ear. By slipping the headgear onto a patient, the doctor could feel the physical sensations of the wearer.

The doctor could experience the exact physical sensations of their patient, whether or not the patient could say what was wrong, and often deliver a near-perfect diagnosis. The doctors wouldn’t suffer any physical damage, no matter how severe the discomfort or pain, but the frequent sensations of pain had some, well, unforeseen consequences.

But how long will it be until we have to seriously contend with this technology and the potential consequences that it brings with it?

Picking Up Signals

A device that can transmit one person’s physical sensations to another is not as impossible as it sounds, though the technology has a long way to go until it’s able to do so perfectly. The entire process can be split into three steps: (a) recording signals from the brain, (b) decoding them and translating them into a language that the receiver brain can understand, and (c) simulating the sensation in the receiver brain.
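Read as an engineering problem, those three steps form a simple pipeline. The Python sketch below only makes that structure concrete; the functions are placeholders invented for illustration and do not correspond to any real device, dataset, or API.

```python
import numpy as np

# Purely illustrative stand-ins for steps (a), (b), and (c); none of these
# functions correspond to real hardware or real decoding models.

def record_signals(duration_s: float, sample_rate_hz: int = 1000) -> np.ndarray:
    """(a) 'Record' activity from the sender brain; here it is just synthetic noise."""
    return np.random.randn(int(duration_s * sample_rate_hz))

def translate_for_receiver(sender_signal: np.ndarray) -> np.ndarray:
    """(b) Map the sender's pattern into something the receiver brain could use.
    A real system would need a per-person decoding model; this only rescales."""
    return (sender_signal - sender_signal.mean()) / (sender_signal.std() + 1e-9)

def stimulate_receiver(pattern: np.ndarray) -> None:
    """(c) Drive a hypothetical stimulator with the translated pattern."""
    print(f"would deliver {pattern.size} stimulation samples, "
          f"peak amplitude {np.abs(pattern).max():.2f} (arbitrary units)")

stimulate_receiver(translate_for_receiver(record_signals(duration_s=1.0)))
```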

Let’s start with the first step: recording signals from the brain. Electrodes or fiber-optics can record information such as pain signals from the sender brain. And the hardware required to link a human brain to a computer has become smaller over time, making the prospect of implanting a device or antenna a very real possibility. But any of these devices, no matter how small, would need to be surgically placed in the brain, and such an invasive procedure involving the brain is still risky and imperfect.

Not only is the surgery itself risky; the recipient’s body could also reject the implant, or the device could deteriorate or malfunction over time.

However, there’s a non-invasive way to do the same thing, in which a device reads brain signals from the surface of the skin. “If you go the non-invasive route, you have the luxury of recording from multiple sites, sometimes the entire brain, without any surgery. However, you lose precision,” says Andrea Stocco, an assistant professor at the Department of Psychology and Institute for Learning and Brain Sciences at the University of Washington in Seattle. That is, because the receiver is so far from where the signals are coming from in the brain, devices often can’t pinpoint their origin closer than a general area.

The method most often used to do that today is called an electroencephalogram (EEG). It measures electrical signals in the brain, and its headset looks similar to Haynes’ transceiver in the show. EEGs can help doctors diagnose and treat brain disorders in which a lot of signals are going haywire, such as epilepsy, but they’re not precise enough to do much more than that. “EEG headsets can be made portable and cheap, but they have terrible problems in isolating signals, since they tend to pick up signals from all over the brain,” Stocco says.
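As a rough illustration of that isolation problem, the short simulation below mixes two invented “sources” into two scalp channels. Every number in it (frequencies, mixing weights, noise level) is a made-up assumption, but it shows why each electrode ends up correlated with activity from more than one region.

```python
import numpy as np

# Toy illustration: each scalp electrode records a weighted mix of every source,
# so no single channel cleanly reflects one brain region. All values are invented.
rng = np.random.default_rng(0)
t = np.linspace(0, 1, 1000)                  # one second sampled at 1 kHz

source_a = np.sin(2 * np.pi * 10 * t)        # pretend 10 Hz activity in region A
source_b = np.sin(2 * np.pi * 22 * t)        # pretend 22 Hz activity in region B

mixing = np.array([[0.8, 0.4],               # electrode 1: mostly A, some B
                   [0.5, 0.7]])              # electrode 2: a blend of both
electrodes = mixing @ np.vstack([source_a, source_b])
electrodes += 0.2 * rng.standard_normal(electrodes.shape)

# Correlate each electrode with each source: every channel "sees" both of them.
for i, channel in enumerate(electrodes, start=1):
    for name, src in (("A", source_a), ("B", source_b)):
        r = np.corrcoef(channel, src)[0, 1]
        print(f"electrode {i} vs source {name}: r = {r:.2f}")
```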

Image Credit: Netflix/YouTube

Sending Simplistic Signals

Steps two and three in the process — translating these signals into patterns of neural activity that the receiver brain can understand, and then reproducing them there — are perhaps the most difficult. Of course, without a perfect reading, translating and transmitting signals becomes much more difficult. Robert Gaunt, assistant professor of physical medicine and rehabilitation at the University of Pittsburgh, has taken on this challenge. But instead of relaying physical sensation from brain to brain, he has helped rehabilitate sensation in those lacking it — a device he’s working on allowed an amputee to feel touch again via a robotic arm.

Using electrical stimulation in the brain, “we can create perceptions that people would describe as being cutaneous, or touch, in nature at specific locations on the body,” Gaunt tells Futurism, no matter whether that part of the body is physically present or connected to the brain.

But your sense of touch is surprisingly complex, and simulating sensation in those who can no longer feel is still early in its development. So far, the technology can’t make many distinctions between those “cutaneous” perceptions — say, the temperature and pressure of holding an ice pack to the skin. And some sensations are more difficult to conjure than others, just because of the region of the brain that controls them. “It’s easier to create sensations of touch, pressure, vibration, or a tingle than it is with pain. And that has got to do with some detailed physiological reasons about the sizes of axons and nerve cells themselves,” Gaunt says.

Image credit: Netflix

Touch-based sensations are multimodal: a variety of sensors (nerves) in our hands send small snippets of different information to the brain, which synthesizes the entire sensation. To recreate that perfectly in a lab, scientists would have to manipulate each signal in the exact combination and relay them all at the right speed. In short, it would be a huge challenge.

Furthermore, no two brains are the same. “Neural codes differ from individual to individual. Although there is a fair degree of similarity, especially at the level of brain architecture, there are also many differences between individual brains,” Stocco says. So even if we could perfectly replicate all of these signals, “it is not possible to simply ‘copy’ a pattern of activity from one brain to another; you would need to adapt and ‘translate’ it,” Stocco says.

Translating these brain signals is still very complex. Neurons in the brain all act a little differently, and scientists are just starting to get a sense of how to manipulate the communication system between them. “Every time you stimulate a neuron, you create a complex cascade of effects in a dynamic system. That means that, even if you fire your probe at 50Hz [for example], the cells nearby might not be responding at the same frequency.”

So we’re still a ways from being able to stimulate the brain all that precisely.

Feeling The Future

So let’s get to it then: how far are we, exactly, from being able to read sensations, decode them, and successfully transmit them to a receiver brain? Stocco believes the future lies in minimally invasive technologies that avoid both highly invasive neural stimulation and applying electricity to the skin. To record signals, “you could slip a network of tiny cortical sensors just underneath the skull, and have it reside permanently” — sort of the way the cap works in Black Mirror, except it would be placed directly on the brain.

To stimulate the brain, Stocco is betting on ultrasound. You’ve probably heard of ultrasound: it’s been used in medicine since World War II to do things like take images of fetuses or open the blood-brain barrier to deliver drugs. In 2014, researchers at Virginia Tech attempted to modulate the firing of neurons in the brain using a focused beam of ultrasound waves sent through the skull. The non-invasive experiment was not able to make participants feel something that wasn’t there, but the ultrasound helped them better distinguish between two stimuli.

It’s not quite Black Mirror technology, but it’s an interesting finding that could warrant further study.

Even though the technology overall has a ways to go, it’s changing fast. And Stocco is optimistic that we could send physical sensations from one brain to the other pretty soon. “In twenty years, we have moved from crude pilots to having working limb prosthetics and cochlear implants, and as of now even a working memory prosthetic. My bet is that something close to a full neural interface that would let us feel what others feel could be reached by the end of 2038,” Stocco says.

The post Dark Future: Here’s When We’ll Have the Black Mirror Tech That Lets us Share Physical Sensations appeared first on Futurism.

Futurism