Apple sponsoring ‘Machine Vision Conference’ in Israel, will lead discussion on the iPhone X’s TrueDepth system

Apple is set to play a role in the Israel Machine Vision Conference in Tel Aviv next month. The company is listed as a sponsor for the event, and one of its video engineering leads will give a presentation…


9to5Mac

LG to introduce Vision AI with refreshed V30 at MWC 2018

Back in January, Korean media suggested a new LG phone with AI capabilities would arrive at MWC 2018, and today LG confirmed it in a press release. The manufacturer will introduce Vision AI for smartphones in Barcelona, and the technology will be featured in the 2018 version of “LG’s most advanced flagship smartphone to date” – the V30.

LG V30 in Raspberry Rose

Vision AI will automatically analyze objects and recommend the best shooting mode among eight options: portrait, food, pet, landscape, city, flower, sunrise, and sunset. The tech will take into consideration angles of view,…

GSMArena.com – Latest articles

MIT’s NanoMap vision helps drones to see complexity at speed

CSAIL’s NanoMap allows drones to fly with uncertainty

Researchers at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) have developed a sophisticated computer vision system for flying robots.

NanoMap allows drones to navigate through dense environments at 20 miles per hour.

Drones’ abilities are taking off

Today’s commercial drones far exceed the capabilities of their predecessors. But if they are to take on more complex or commonplace roles in the workplace, they need to get much smarter and safer.

The vast majority of drones deployed in construction, media, or agriculture applications have some form of computer vision. At the very least they can sense obstacles directly in front of them and avoid collisions.

Some, like DJI’s latest model and those enhanced with Intel’s RealSense technology, can detect obstacles in multiple directions and plot a path around them.

However, CSAIL’s NanoMap system aims to take that awareness to the next level.

As outlined in a new research paper, NanoMap integrates sensing more deeply with control. It works from the starting point that any drone’s position in the real world is uncertain over time.

The new system allows a drone to model and account for that uncertainty when planning its movements.

Navigating around warehouses to check stock levels or move items from one place to another is just one example of the kind of dynamic environments where drones will need to operate safely.

This ability will be vital in helping drones’ commercial applications to spread.

Read more: CSAIL team pairs robots with VR for smart manufacturing

SLAM dunk scenarios

Developing drones that can build a picture of the world around them and react to shifting environments is a challenge. This is particularly true for aircraft whose onboard computational power is limited, because more processing hardware means more weight.

Simultaneous localisation and mapping (SLAM) technology is a common way for drones to build a detailed picture of their location from raw data. However, this technique is unreliable at high speed, which makes it unsuitable for tight spaces and for environments where objects are moved around or the layout changes.

“Overly confident maps won’t help you if you want drones that can operate at higher speeds around humans,” said graduate student Pete Florence, lead author on a related paper.

“An approach that is better aware of uncertainty gets us a much higher level of reliability in terms of being able to fly in close quarters and avoid obstacles.”

Read more: Pyeongchang Winter Olympics to be defended by drone-catching drones

NanoMap works with uncertainty

Using NanoMap, a drone can build a picture of its surroundings by stitching together a series of depth-sensing measurements. Not only can the drone plan around what it can currently see, but it can also plan how to move through areas it can’t see yet, based on what it has seen before.

“It’s like saving all of the images you’ve seen of the world as a big tape in your head,” explains Florence. “For the drone to plan its motions, it essentially goes back into time to think individually of all the different places that it was in.”

NanoMap operates under an assumption that humans are familiar with: if you know roughly where something is and how large it is, you don’t need much more detail if your only aim is to avoid crashing into it.

By accounting for uncertainty in its measurements, the NanoMap system has reduced the team’s crash rate to just two percent.
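NanoMap’s actual machinery is more involved, but the core idea described above – remembered depth measurements whose positions grow less certain over time, with obstacles given a correspondingly wider berth – can be sketched roughly. Everything below (the class, the `is_safe` helper, and all numbers) is illustrative, not taken from the CSAIL code:

```python
import math

# Minimal sketch (not the actual NanoMap implementation): each depth frame
# stores obstacle positions relative to the drone, plus a growing bound on
# how uncertain that relative pose has become as the drone keeps moving.
class DepthFrame:
    def __init__(self, obstacles, pose_sigma):
        self.obstacles = obstacles    # [(x, y, radius), ...] in frame coords
        self.pose_sigma = pose_sigma  # accumulated pose uncertainty (meters)

def is_safe(point, frames, drone_radius=0.3, k=3.0):
    """Check a candidate waypoint against every remembered frame.

    Each obstacle is inflated by k * pose_sigma: the less sure we are
    about where an old measurement sits, the wider a berth we give it.
    """
    px, py = point
    for frame in frames:
        margin = drone_radius + k * frame.pose_sigma
        for (ox, oy, r) in frame.obstacles:
            if math.hypot(px - ox, py - oy) < r + margin:
                return False
    return True

# A fresh frame (low uncertainty) and an older one (drifted estimate).
frames = [
    DepthFrame([(2.0, 0.0, 0.5)], pose_sigma=0.05),
    DepthFrame([(0.0, 3.0, 0.5)], pose_sigma=0.40),
]
print(is_safe((1.0, 1.0), frames))  # clear of both inflated obstacles
print(is_safe((0.0, 2.0), frames))  # blocked once uncertainty inflates the old obstacle
```

The trade-off matches the quote above: inflating old, uncertain measurements sacrifices some free space but lets the planner keep using them instead of discarding them.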

“The key difference to previous work is that the researchers created a map consisting of a set of images with their position uncertain, rather than just a set of images with their positions and orientation,” says Sebastian Scherer, a systems scientist at Carnegie Mellon University’s Robotics Institute.

“Keeping track of this uncertainty has the advantage of allowing the use of previous images, even if the robot doesn’t know exactly where it is. This allows for improved planning.”

Internet of Business says

As drones spread into more and more vertical applications, such as farming, manufacturing, critical infrastructure maintenance, building, environmental monitoring, security, law enforcement, broadcasting, autonomous cargo, deliveries, and even public transport, their safety around human beings, and in complex environments, becomes ever more important to demonstrate.

Light-touch regulation is a good idea, but public safety must remain paramount.

Over time, the regulatory environment will relax to accommodate drones as safety improves. But until then, it will remain cautious and conservative – except in remote areas, such as over the sea at offshore wind farms or oil rigs.

MIT should be congratulated for this latest innovation in drone safety, but progress remains incremental.

The core lesson is this: a two per cent crash rate is impressive, but it’s still unacceptable. In enterprise software or cloud services, no one would accept 98 per cent reliability, so it is certainly not acceptable for industrial machinery in public spaces.

Battery-operated, rotary-wing autonomous vehicles have multiple points of failure. In smart cities, factories, or other public spaces, a single catastrophic incident could set back the industry for years. It is incumbent on all of us to ensure that no one is harmed.


Internet of Business

LG details its Vision AI and Voice AI, which will debut on the 2018 version of the V30

Last week we heard rumors that LG would announce an updated version of the LG V30 at Mobile World Congress. That lines up well with previous reports that the company won’t release new devices on a yearly schedule, and that the expected G7 was scrapped in favor of a new design. The other part of the V30 leak was an emphasis on the addition of an AI feature of sorts called “LG Lens.” LG, as it always does prior to major events, has out-leaked everyone and confirmed this by announcing a new 2018 version of the V30 that comes with new AI functionality.



Android Police – Android news, reviews, apps, games, phones, tablets

LG announces Vision AI, Voice AI features for new V30 model

LG Vision AI announcement

Typically when LG’s got a big announcement coming, the company likes to drop tidbits about it in the weeks leading up to the big reveal. It looks like that’s going to be the case again ahead of MWC 2018.

LG plans to launch new smartphone artificial intelligence (AI) technologies at MWC. The first is called Vision AI, which is meant to make a smartphone’s camera smarter. Vision AI analyzes objects that you’re pointing your camera at and recommends the best shooting mode from the eight options available: portrait, food, pet, landscape, city, flower, sunrise, and sunset.

LG explains that its Vision AI analyzes several aspects of an object, including angle of view, color, reflections, backlighting, and saturation levels. The company built this technology by analyzing over 100 million images to train its image recognition algorithms.
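LG hasn’t published how Vision AI works under the hood, but the recommendation step described above has a simple general shape: score the scene against each of the eight modes and pick the winner. The sketch below is purely illustrative – the `recommend_mode` helper, its scores, and its threshold are assumptions, not LG’s API:

```python
# Toy sketch of a scene-mode recommender. A real system would get these
# scores from an image classifier; here they are supplied directly.
MODES = ["portrait", "food", "pet", "landscape",
         "city", "flower", "sunrise", "sunset"]

def recommend_mode(scores, threshold=0.4):
    """scores: dict mapping mode name -> classifier confidence in [0, 1].

    Returns the highest-scoring mode, or "auto" when no mode is a
    confident enough match to override the camera's default.
    """
    best = max(MODES, key=lambda m: scores.get(m, 0.0))
    if scores.get(best, 0.0) < threshold:
        return "auto"
    return best

print(recommend_mode({"food": 0.82, "flower": 0.11}))  # food
print(recommend_mode({"city": 0.25, "sunset": 0.30}))  # auto
```

The fallback to a default mode matters in practice: recommending a wrong specialty mode (say, “food” for a portrait) is worse than leaving the camera in auto.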

Vision AI can also provide shopping information, such as the best price on a product or similar products to the one you’ve scanned. It can automatically brighten images in dim environments using a new low-light shooting mode, too.

LG Vision AI features

The other new AI tech coming from LG is Voice AI. LG has worked with Google Assistant to craft nine new commands that let you perform actions using just your voice. These include taking a panoramic photo, taking a food photo, and performing a shopping search.

LG says that these new features will be included with “the 2018 version of the LG V30.” Some of these AI features will be made available to older smartphones through software updates, too, but LG didn’t elaborate on which models might get which features.

The LG V30 has placed a focus on photography since its launch last year, so it’s no surprise that LG is planning to build on that with new AI technologies for a new version of the V30. We’ll have to wait until MWC before we know everything about this updated V30, but rumors have suggested that it’ll come equipped with AI camera features and more built-in storage.

PhoneDog.com – Latest videos, reviews, articles, news and posts