In an interview, Facebook CEO Mark Zuckerberg revealed that the company’s Messenger app scans and analyzes messages and photographs for what it deems concerning or unacceptable content.
[ Continue reading this over at RedmondPie.com ]
The QR code scanning feature in the stock Camera app suffers from an odd URL parser bug.
“Scanning QR codes in iOS 11 Camera app could take you to malicious websites” is an article by iDownloadBlog.com.
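As reported, the Camera app’s notification banner could display one domain while Safari actually opened another. The sketch below is not Apple’s code; it uses a hypothetical URL to illustrate how a naive host preview can be fooled by the userinfo (`user:pass@host`) part of a URL, while a proper RFC 3986 parse gets the real host.

```python
# Sketch (not Apple's code): how a naive URL preview can disagree with the
# host a browser actually visits. The example URL is hypothetical.
from urllib.parse import urlsplit

url = "https://facebook.com:443@attacker.example/login"

def naive_preview_host(u):
    """Naive preview: take whatever follows 'https://' up to the first
    ':' or '/', checking '@' last -- which is the bug."""
    rest = u.split("://", 1)[1]
    for sep in (":", "/", "@"):
        idx = rest.find(sep)
        if idx != -1:
            rest = rest[:idx]
    return rest

# Correct parse: RFC 3986 says everything before '@' in the authority is
# userinfo, so the real host is what comes after it.
real_host = urlsplit(url).hostname

print(naive_preview_host(url))  # the preview shows "facebook.com"
print(real_host)                # but the browser goes to "attacker.example"
```

The mismatch between the two results is the whole attack: the user trusts the previewed domain, the browser navigates to the other one.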
Police in China are now sporting glasses equipped with facial recognition devices and they're using them to scan train riders and plane passengers for individuals who may be trying to avoid law enforcement or are using fake IDs. So far, police have c…
Engadget RSS Feed
At the Las Vegas Consumer Electronics Show, Occipital showed off its Structure mobile depth sensor that attaches to the back of an iPad to capture a 3D model of whatever is in front of it, including small and large objects, and the overall layout of a room — and AppleInsider just got one to try out.
AppleInsider – Frontpage News
Samsung’s upcoming flagships – the Galaxy S9 and S9+ – may feature a new ‘Intelligent Scan’ feature for unlocking the phone. Spotted in the South Korean company’s Settings app, the description for the feature says it utilizes the “iris scanner and face recognition together for better results even in low or very bright light.” Details are scarce at the moment, so questions such as how the feature will work if the user isn’t looking directly at the iris scanner remain unanswered (although, logically, facial scanning should take precedence in…
As the Galaxy S9’s launch date of February 25th creeps closer, we’re learning more and more about the upcoming flagship. Just two days ago, we even got a peek at the Galaxy S9 and Galaxy S9+ in press render form. Now, after some digging around in Samsung’s Settings app, a developer and AP reader reached out to us about his discovery of a feature called ‘Intelligent Scan,’ which we’d never heard of before.
Galaxy S9 may feature ‘Intelligent Scan,’ a combination of iris scanning and facial recognition was written by the awesome team at Android Police.
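Samsung hasn’t said how Intelligent Scan actually combines the two sensors. A common generic approach to this kind of multimodal biometrics is score-level fusion, sketched below with made-up weights and threshold: each matcher reports a confidence, and a weak reading from one sensor (say, the iris scanner in very bright light) can be compensated by the other.

```python
# Illustrative sketch of multimodal biometric fusion -- not Samsung's
# actual algorithm. Each matcher returns a confidence in [0, 1]; the
# device unlocks if the weighted combination clears a threshold.

def fused_unlock(iris_score, face_score,
                 iris_weight=0.5, face_weight=0.5, threshold=0.65):
    """Return True if the weighted combination of match scores clears
    the unlock threshold. Weights and threshold are hypothetical."""
    combined = iris_weight * iris_score + face_weight * face_score
    return combined >= threshold

# A strong iris match carries a mediocre face match:
print(fused_unlock(iris_score=0.95, face_score=0.40))  # True
# In very bright light the iris read degrades, but the face match compensates:
print(fused_unlock(iris_score=0.50, face_score=0.90))  # True
# Two weak readings together still fail:
print(fused_unlock(iris_score=0.40, face_score=0.35))  # False
```

Fusing scores rather than requiring both matchers to pass independently is what would let the combined mode work “even in low or very bright light,” where one sensor alone tends to fail.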
Vivo announced its Under Display Fingerprint Scanning Solution, based on an ultrasonic sensor, at MWC Shanghai 2017 back in June. Today at CES 2018 it showed off the world’s first ready-to-produce in-display fingerprint scanning smartphone, which uses an optical fingerprint sensor from Synaptics. This lets users unlock their smartphone through one-touch fingerprint scanning directly on the smartphone display. It requires no physical button for the fingerprint sensor and allows a true full-screen display and an integrated unibody design. Availability of Vivo’s first in-display fingerprint scanning smartphone will be announced in early 2018. Commenting on the announcement, Alex Feng, Senior Vice President of Vivo, said:

“With our efforts in extensive consumer research and long-term R&D investment, Vivo is well positioned to pioneer the development of fingerprint scanning technology. We first presented a prototype of our fingerprint scanning solution at MWC Shanghai 2017 based on an ultrasonic sensor, and have remained committed to realizing our vision for future smartphones. Today’s showcase of a ready-to-produce in-display fingerprint scanning smartphone featuring an optical fingerprint sensor is a big leap forward in bringing consumers this long-awaited, futuristic mobile experience. We are very excited to make it available to consumers soon.”
Back in December Synaptics announced a fingerprint scanner for smartphones that can be placed under the display. The company said a “Tier 1” manufacturer would introduce the first phone with the tech at CES 2018. Today vivo confirmed rumors from yesterday and showcased the device in Las Vegas, becoming the first company in the world to implement the technology in a smartphone. The in-display fingerprint sensor will sit between the mainboard and the OLED panel where it will illuminate the finger and then process the beams of light. Alex Feng, Senior VP at vivo, said in a press…
Self-driving cars are already driving on our streets, offering a clear sign of the changes coming to the traditional driving experience. But some veteran drivers may be reluctant to give up control of their vehicles to artificial intelligence (AI). To offer a middle ground between traditional driving and self-driving vehicles, automakers have implemented driver-assist features that can enhance a person’s driving experience, like braking when the driver isn’t paying attention, or assisting with parking.
Nissan, however, is proposing something different in that middle ground. The automaker announced at the beginning of January 2018 that it is developing a Brain-to-Vehicle (B2V) interface that, if implemented, would speed up a driver’s reactions to make driving safer.
In this collaboration between human driver and semi-autonomous vehicle, the vehicle would predict the driver’s actions, be it turning the steering wheel or applying the brakes, by reading and interpreting the driver’s brain signals using electroencephalography (EEG) technology. Upon doing so, the semi-autonomous vehicle would start those actions 0.2 to 0.5 seconds sooner. The automaker calls it “Nissan Intelligent Mobility.” When in autonomous mode, the system could also detect driver discomfort and adjust its driving style accordingly, or use augmented reality to alter what the driver sees.
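Nissan has not published how B2V works internally, but the idea as described, detecting the brain activity that precedes a physical action and pre-actuating the vehicle 0.2 to 0.5 seconds early, can be sketched in outline. Everything below is a stand-in: a real system would run a trained classifier over multi-channel EEG, not a threshold on a toy one-channel average.

```python
# Illustrative sketch only -- Nissan has not disclosed B2V's design.
# Continuously buffer EEG readings; when a "readiness" feature crosses a
# threshold, anticipate the driver's action and begin it early.
from collections import deque

WINDOW = 50  # number of recent EEG samples to keep (hypothetical rate)

class BrainToVehicleSketch:
    def __init__(self, threshold=0.8):
        self.samples = deque(maxlen=WINDOW)
        self.threshold = threshold

    def feed(self, eeg_value):
        """Ingest one EEG reading; return an anticipated action or None.
        The mean-over-window 'readiness' feature is a toy stand-in for a
        trained motor-preparation classifier."""
        self.samples.append(eeg_value)
        if len(self.samples) < WINDOW:
            return None  # not enough signal buffered yet
        readiness = sum(self.samples) / WINDOW
        if readiness > self.threshold:
            return "begin_braking"  # pre-actuate ~0.2-0.5 s early
        return None

b2v = BrainToVehicleSketch()
action = None
for v in [0.9] * WINDOW:  # sustained high activity -> predicted action
    action = b2v.feed(v)
print(action)  # begin_braking
```

The hard part, as the neurologists quoted by The Verge suggest, is exactly what this toy omits: reliably separating genuine motor preparation from noise in real time.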
“When most people think about autonomous driving, they have a very impersonal vision of the future, where humans relinquish control to the machines. Yet B2V technology does the opposite, by using signals from their own brain to make the drive even more exciting and enjoyable,” said Daniele Schillaci, Executive Vice President of Nissan, in a statement. “Through Nissan Intelligent Mobility, we are moving people to a better world by delivering more autonomy, more electrification and more connectivity.”
Nissan intends to demonstrate its results at the 2018 Consumer Electronics Show (CES) in Las Vegas next week, which should shed more light on the extent of the technology. The Verge notes that it’s largely unclear how the company has accomplished this, though an accompanying video shows people wearing a small, black headset. Beyond that, in 2016 The Verge reported that some neurologists had concerns about applying EEG tech to vehicles.
Of course, we won’t know for certain just how well Nissan’s Brain-to-Vehicle interface will perform until it’s unveiled at CES. But a Nissan representative told The Verge, “it’s something that’s being shown in a relatively early phase, and is not yet close to implementation. We are aiming for practical application in 5 to 10 years.”
The idea of a brain-controlled interface is nothing new, though it’s no less exciting. Last September, researchers connected the human brain to the internet for the first time, turning it into “an Internet of Things (IoT) node on the World Wide Web.” Tesla and SpaceX CEO Elon Musk hopes to one day merge brains and computers to give us the means to compete with AI.
There’s still the matter of how safe these brain-controlled interfaces will be, especially when it comes to driving, or whether the technology would be an improvement over wholly autonomous cars. Self-driving cars still have a long way to go before they’re the safest drivers on the road, yet we’ve seen multiple cases of humans being at fault when self-driving cars are involved in accidents. Perhaps Nissan’s demonstration next week, and other ongoing developments, will help us start to answer those questions.
The post Nissan’s Brain-To-Vehicle Interface Could Make Driving Safer by Scanning Your Brain appeared first on Futurism.
Coming up today: a giant security flaw, scanning millions of dead creatures and Iran’s social media block