Google’s AI-powered Google Lens rolls out on iOS


In a tweet on Thursday, Google said its Google Lens visual search feature will roll out to iOS devices over the coming week as part of an update to the company’s Google Photos app.
AppleInsider – Frontpage News


Staqu introduces AI-powered Smart Glasses in India that can help identify threats like intruders and criminals


Gurgaon-based Staqu has today launched AI-powered Smart Glasses with an inbuilt camera in India. The glasses combine speech and image recognition, and the company says they can identify potential threats to civil society, such as criminals, intruders or terrorists. The built-in camera captures input to trigger facial recognition, and once a face is matched within the given databases, the Smart Glass projects the result on the glass screen. The entire process happens in real time as the user simply glances over the vicinity.

According to the company, the glasses will work even in uncontrolled, real-world scenarios, as they fuse speech and image recognition into a hybrid identification technology to uniquely identify anyone. The information is streamed in real time from a centralized server, and the glasses can be controlled from a centralized administrative portal, where specific recognition targets for each glass can be set remotely.

According to an ET report, Staqu will start a pilot of its smart glass platform with Punjab Police and will work closely with the force to help identify criminals. The product will be provided to customers on a yearly license-based model. Commenting on the announcement, Atul Rai, Co-Founder & CEO of Staqu, said: At Staqu, …
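For readers curious about the mechanics, the sketch below shows in broad strokes how a watchlist match of this kind can work: a frame from the glasses’ camera is turned into a face embedding and compared against a database of known embeddings, with matches above a similarity threshold sent to the display. The embedding function, threshold and database here are illustrative stand-ins, not Staqu’s actual system.

```python
# Hypothetical sketch of a watchlist-matching loop. The embedding model is
# a stand-in (random unit vectors) so the script runs on its own.
import numpy as np

EMBEDDING_DIM = 128      # typical size for face embeddings (assumption)
MATCH_THRESHOLD = 0.8    # cosine-similarity cutoff (assumption)

def embed_face(frame: np.ndarray) -> np.ndarray:
    """Stand-in for a real face-embedding model (e.g. a CNN)."""
    rng = np.random.default_rng(abs(hash(frame.tobytes())) % (2**32))
    v = rng.normal(size=EMBEDDING_DIM)
    return v / np.linalg.norm(v)

def best_match(embedding: np.ndarray, watchlist: dict) -> tuple:
    """Return (name, similarity) of the closest watchlist entry, or (None, sim)."""
    best_name, best_sim = None, -1.0
    for name, ref in watchlist.items():
        sim = float(embedding @ ref)   # cosine similarity of unit vectors
        if sim > best_sim:
            best_name, best_sim = name, sim
    return (best_name, best_sim) if best_sim >= MATCH_THRESHOLD else (None, best_sim)

if __name__ == "__main__":
    # Fake database and a fake camera frame, purely for illustration.
    watchlist = {"suspect_42": np.ones(EMBEDDING_DIM) / np.sqrt(EMBEDDING_DIM)}
    frame = np.zeros((480, 640, 3), dtype=np.uint8)
    name, sim = best_match(embed_face(frame), watchlist)
    print("match:", name, "similarity:", round(sim, 3))
```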
Fone Arena

Wyss Institute and Harvard develop AI-powered exosuit



Researchers from the Wyss Institute and Harvard University have developed an algorithm that allows wearable robots to adapt to an individual’s movements in as little as 20 minutes – greatly increasing walking efficiency.

This points to an exciting future of assisted movement across a variety of new applications.

We all move differently, and when we walk, we’re constantly adjusting how we move to save energy – or more accurately, to reduce the metabolic cost. The efficiency of human walking comes from using our muscles to inject impulses at the right moments to preserve the pendulum-like motion of our legs and maintain our momentum.

For athletes in training, fitness fanatics, patients who need movement assistance, or anyone recuperating from injury or illness, soft assistive devices like the exoskeleton being developed by the Harvard Biodesign Lab can aid these movements by sensitively augmenting the wearer’s physiology, providing the right level of assistance at the right time.

However, they need to be tailored to the wearer to suit their individual movements. For all the advanced material and robotics engineering that goes into such devices, personalising them is time-consuming and inefficient.

Gait keepers

Joint research by the Wyss Institute for Biologically Inspired Engineering and the Harvard John A. Paulson School of Engineering and Applied Sciences (SEAS) has created a machine learning algorithm that can quickly understand an individual’s characteristic movements and tailor the control strategies of soft, wearable exosuits to match.

In a Wyss Institute report, Ye Ding, a Postdoctoral Fellow at SEAS and co-lead author of the research, said:

This new method is an effective and fast way to optimise control parameter settings for assistive wearable devices. Using this method, we achieved a huge improvement in metabolic performance for the wearers of a hip extension assistive device.

The solution is known as a human-in-the-loop Bayesian optimisation method. By providing personalised hip assistance, it reduces the wearer’s metabolic cost compared with walking without the device, or with an un-optimised version of it. Watch the video below for more details.

The algorithm quickly identifies the best control parameters for an individual – to minimise the energy required for walking – by measuring physiological signals, such as breathing rate, to identify the metabolic cost. As the video demonstrates, the system fine-tuned these parameters and adapted the exosuit to the wearer’s needs.
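To make that loop concrete, here is a minimal sketch of human-in-the-loop optimisation under stated assumptions: the parameter names (peak timing and peak force of hip assistance), their ranges, and the simulated metabolic-cost measurement are illustrative stand-ins, not the Wyss Institute’s actual controller or protocol. The optimiser proposes settings, the “wearer” walks with them while a cost is estimated, and the result is fed back to choose the next candidate.

```python
# Hedged sketch of human-in-the-loop Bayesian optimisation for a wearable
# device. Requires: pip install scikit-optimize numpy
import numpy as np
from skopt import Optimizer
from skopt.space import Real

def measure_metabolic_cost(peak_timing: float, peak_force: float) -> float:
    """Stand-in for a real measurement (e.g. estimated from breathing rate
    while the wearer walks with these settings for a few minutes)."""
    ideal = np.array([0.55, 0.35])            # pretend optimum (assumption)
    noise = np.random.normal(scale=0.02)      # physiological noise
    return float(np.sum((np.array([peak_timing, peak_force]) - ideal) ** 2) + noise)

search_space = [Real(0.0, 1.0, name="peak_timing"),  # when in the gait cycle to assist
                Real(0.0, 1.0, name="peak_force")]   # how strongly to assist
opt = Optimizer(search_space, base_estimator="GP", acq_func="EI")

for step in range(20):                       # ~20 short walking bouts
    params = opt.ask()                       # optimiser proposes settings
    cost = measure_metabolic_cost(*params)   # wearer walks, cost is estimated
    opt.tell(params, cost)                   # feed the result back

best_i = int(np.argmin(opt.yi))
print("lowest cost:", round(opt.yi[best_i], 4), "with parameters:", opt.Xi[best_i])
```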

The benefits of an AI-powered exosuit

“Before, if you had three different users walking with assistive devices, you would need three different assistance strategies,” said Myunghee Kim, Ph.D., postdoctoral research fellow at SEAS and co-lead author of the paper.

As well as the time-saving advantages, the combination of algorithm and exosuit reduced metabolic cost by over 17 percent, an improvement of more than 60 percent over the team’s previous work.

Scott Kuindersma, Ph.D., assistant professor of Engineering and Computer Science at SEAS, said:

Optimisation and learning algorithms will have a big impact on future wearable robotic devices designed to assist a range of behaviours. These results show that optimising even very simple controllers can provide a significant, individualised benefit to users while walking. Extending these ideas to consider more expressive control strategies and people with diverse needs and abilities will be an exciting next step.

(Credit: Wyss Institute) Might we all have buns of steel in the future?

Internet of Business says

This pioneering research shows the far-reaching value of AI, even in wearable robotics, where the wearer is essentially in control. The next step will be extending the AI’s capabilities to a more complex exoskeleton that assists multiple joints at the same time.

Exosuits have huge potential across multiple fields. In healthcare, they could assist the elderly and disabled with their movements. In supply chain, manufacturing, construction and agriculture, they could assist with heavy lifting. Similarly, first responders in emergencies and military personnel could benefit from robotic aids.

While existing solutions are far from Iron Man levels of advancement, there is also some way to go in making exosuits more practical for prolonged wear. However, there’s no doubt that, at the current rate of progress, we’ll soon be seeing wearable robots outside the research lab.


Internet of Business


Huawei P20 promo images show off the AI-powered triple camera


Official-looking promo images show off the triple camera setup on the Huawei P20 series and they reveal the ace up its sleeve – AI. The tagline reads “See mooore with AI” (three Os for three cameras, get it?). Previous images of the P20 showed only two cameras, so the three-camera setup may be reserved for the Pro model. It’s a little strange that the third camera is on a separate “island”, but perhaps there’s a good reason for that – we’ll find out on March 27. And since the notch is a controversial topic, this is one of…

GSMArena.com – Latest articles


Google Drive is getting AI-powered organization for shared files


Google Drive tends to pick up a lot of shared files, especially if your place of business uses G Suite. The “Shared with Me” section can end up rather messy as a result, but Google is now looking to clean it up with the help of artificial intelligence. In the coming days, Drive will begin guessing which files you want to open.

According to Google, searching for shared content by owner is the most popular way of finding things.
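As a rough illustration of the general idea (not Google’s actual algorithm), a predictor could rank shared files using simple signals such as how often you open files from each owner and how recently a file was shared. The sketch below is hypothetical; the fields and weights are assumptions.

```python
# Hypothetical sketch of ranking "Shared with Me" files by simple signals.
from dataclasses import dataclass
import time

@dataclass
class SharedFile:
    name: str
    owner: str
    shared_at: float          # Unix timestamp when the file was shared with you

def rank_shared_files(files, opens_by_owner, now=None):
    """Score each file by owner affinity plus a recency bonus."""
    now = now or time.time()
    def score(f: SharedFile) -> float:
        owner_affinity = opens_by_owner.get(f.owner, 0)   # past opens from this owner
        days_old = (now - f.shared_at) / 86400.0
        recency_bonus = max(0.0, 7.0 - days_old)          # fades over a week
        return owner_affinity + recency_bonus
    return sorted(files, key=score, reverse=True)

if __name__ == "__main__":
    now = time.time()
    files = [SharedFile("Q3 budget", "alice@example.com", now - 2 * 86400),
             SharedFile("Party photos", "bob@example.com", now - 30 * 86400),
             SharedFile("Design doc", "alice@example.com", now - 1 * 86400)]
    opens_by_owner = {"alice@example.com": 12, "bob@example.com": 1}
    for f in rank_shared_files(files, opens_by_owner, now):
        print(f.name, "from", f.owner)
```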


Android Police – Android news, reviews, apps, games, phones, tablets


A New AI-Powered App Transcribes Your Conversations in Real-Time


Transcription on the Go

If you have to deal with transcribing interviews as part of your daily work (like we do), you’ll find a welcome partner in the new Otter app. Developed by former employees of Google and of speech-recognition veteran Nuance, Otter is a free service that transcribes speech on the go through the power of artificial intelligence (AI).

Voice transcription services aren’t new. There are a number of apps available out there, sure, but none seem to work like Otter — and we’re not even talking about the AI aspect yet. Most voice-transcription apps that are free aren’t very accurate, and those that work really well are often too expensive. Additionally, none transcribe in “real-time” as Otter does.


AISense, the startup that developed Otter, saw an opportunity here. There was a market ready for Otter to penetrate, as it proved during its launch at Mobile World Congress this past week. “This is a perfect time,” AISense CEO and founder Sam Liang told CNET.

This app not only has market trends working in its favor, but it also benefited from a ton of work that has been done recently on voice and AI. There are speech recognition algorithms, which most of us are familiar with because of virtual assistants trained to “talk” to us — Apple’s Siri, Amazon’s Alexa, and Google’s creatively named Assistant. In fact, Amazon is supposedly close to developing another “real-time speech translation” service using Alexa.

On top of this, promising algorithms have been built to produce synthetic speech. Google’s DeepMind proved it can already mimic human speech with astonishing accuracy and clarity.

Voiceprints

All of these developments made it possible to design the Otter app, Liang explained. “With AI tech and deep learning in the last few years, the accuracy for speech recognition has improved dramatically. A few years ago, this system wouldn’t be usable,” he told CNET.

Otter has a rather simple but intuitive approach to voice transcription. As soon as you install the app, which is available for free to both Android and Apple users, it asks you to make a short recording and a long one, which you start by pressing the app’s mic icon. These become the basis for your “voiceprint” so that Otter can identify you in the recordings you make.

Otter saves your voice and tags your transcriptions.

Why does it need to identify you? Well, because Otter’s live transcriptions are ideally separated by each speaker. Also, the raw transcript of a live conversation you’re recording appears almost immediately in front of you. Otter’s AI also automatically puts tags in every recording and transcription you save for easier file management.
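For the curious, speaker tagging of this general kind can work by averaging a few enrollment recordings into a “voiceprint” per person and then assigning each transcript segment to whichever voiceprint its own embedding most resembles. The sketch below is a hedged illustration of that idea; the embedding function, file names, and similarity rule are assumptions, not Otter’s actual implementation.

```python
# Hypothetical sketch of tagging transcript segments with enrolled speakers.
# The embed() function is a stand-in for a real speaker-embedding model.
import numpy as np

def embed(audio_clip: str) -> np.ndarray:
    """Stand-in speaker-embedding model: maps an audio clip to a unit vector."""
    rng = np.random.default_rng(abs(hash(audio_clip)) % (2**32))
    v = rng.normal(size=64)
    return v / np.linalg.norm(v)

def enroll(clips) -> np.ndarray:
    """Average several recordings into one voiceprint (as in the
    short-and-long enrollment step described above)."""
    vp = np.mean([embed(c) for c in clips], axis=0)
    return vp / np.linalg.norm(vp)

def tag_segments(segments, voiceprints):
    """Assign each (audio, text) segment to the most similar voiceprint."""
    tagged = []
    for audio, text in segments:
        e = embed(audio)
        speaker = max(voiceprints, key=lambda name: float(e @ voiceprints[name]))
        tagged.append((speaker, text))
    return tagged

if __name__ == "__main__":
    voiceprints = {"you": enroll(["short_clip.wav", "long_clip.wav"]),
                   "guest": enroll(["guest_intro.wav"])}
    segments = [("seg_001.wav", "Thanks for joining me today."),
                ("seg_002.wav", "Happy to be here.")]
    for speaker, text in tag_segments(segments, voiceprints):
        print(f"{speaker}: {text}")
```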

Of course, it isn’t flawless. Otter has issues with punctuation, which it tends to leave out, and it struggles in crowded places or with loud background noise. Plus, you can’t import audio recordings that weren’t made directly in the app.

Still, for those who do interviews, take copious notes during classes or meetings, or would simply like a hands-free way to record their thoughts as text, an app like Otter could make life much easier. After all, who transcribes speech for the fun of it?

Better try it out while it’s still free, though. AISense plans to implement a subscription model to access extra features later on.


Futurism


[Update x2: Leaked photo] LG reportedly working on updated V30 with more storage and AI-powered ‘LG Lens’


The LG V30 was somewhat of a mixed bag.


Android Police – Android news, reviews, apps, games, phones, tablets

Microsoft and Xiaomi are partnering to make AI-powered speakers, smartphones, and more

Microsoft and Xiaomi have signed a memorandum of understanding (MoU) to work closely together on cloud computing, AI, and hardware. It has so far been uncommon for a US company to partner with a Chinese one on artificial intelligence, but the move makes sense, as the two countries are the biggest markets for those products and services.

Microsoft is planning to allow Xiaomi to use its cloud computing products, including Azure, to develop upgraded phones, laptops, and smart devices to bring them to an international market. At the same time, the partnership will also give Microsoft more reach and access to the Chinese market.


The two are also in discussions about possibly integrating Microsoft Cortana with the Mi…


The Verge – All Posts

Microsoft and Xiaomi will pair up on AI-powered speakers and hardware

In July, Chinese tech giant Xiaomi jumped into the smart speaker race with its answer to Amazon's Alexa and Google Home, the $45 Mi AI — though it probably won't find its way to American shores, given how hard a time it's had penetrating the US and E…
Engadget RSS Feed