Amazon’s Music Storage service will remove your MP3 files on April 30th

We’ve known since last year that Amazon Music was planning to shut down its dedicated cloud music locker. Now, we have a date for when that process will begin. In an email to Amazon Music users, the company says uploaded songs will be removed from a user’s library on April 30th, 2018. You can, however, keep any music in the cloud by proactively going to your Music Settings and clicking the “Keep my songs” button.

Back in December, Amazon stopped letting users upload new tracks to Music Storage, which holds up to 250 songs for free. The company said at the time that by January 2019, users wouldn’t be able to download or stream tracks they’ve uploaded to Music Storage, so it sounds like you’ll still have many months between April and next…

The Verge – All Posts

Sage: Why gender-neutral AI helps remove social bias

AI technologist Kriti Sharma has the ambition of bringing greater diversity and accountability to the algorithms that guide our decisions and sift through our data.

Since starting at UK software company Sage, she has been working on a gender-neutral virtual assistant, Pegg, which is designed to manage customers’ business finances. She has also published a set of core ethical principles for designing AI systems.

Sharma, 29, is now VP of AI at the Sage Group, and is one of a growing number of women with high-profile roles in the artificial intelligence sector.

For example, the UK’s new Office for AI is jointly run by Gila Sacks, director of digital and tech policy at the Department for Digital, Culture, Media, and Sport (DCMS), and Dr Rannia Leontaridi, director of Business Growth at the Department for Business, Energy and Industrial Strategy (BEIS).

Read more: Top priest shares ‘The Ten Commandments of A.I.’ for ethical computing

AI and bias reinforcement

Driving Sharma’s work at Sage is her fear that AI and the fourth industrial revolution will entrench inequality rather than provide solutions to it. Instead of emerging technologies easing problems such as gender, race, and age inequality, the risk is that they could perpetuate them by cementing biases that already exist in human society.

This issue is explored in this external report by Internet of Business editor Chris Middleton.

Speaking to Middleton last year at the Rise of the Machines summit in London, Sharma described herself jokingly as “a token millennial” who had been brought into Sage to shake things up. She explained her belief that the technology industry’s efforts to create human-like software are a strategic error. Instead, AI should “embrace its botness”, she said.

Sharma went on to make the point that many AIs tend to be feminine personalities with female voices, and are designed to respond to routine commands. Meanwhile, some industry-specific systems – in legal services and banking, for example – are often designed to be ‘male’. In this way, she suggested, we risk “projecting workplace stereotypes onto AI” and, by doing so, we reinforce them.

Sharma has expanded on that view in an interview this week. “Despite the common public perception that algorithms aren’t biased like humans, in reality, they are learning racist and sexist behaviour from existing data and the bias of their creators. AI is even reinforcing human stereotypes,” she told PRI.

She shared the example of recent research from Boston University, in which technologists developed an AI program using input from Google News. When the system was asked, “Man is to computer programmer as woman is to X,” the response was “homemaker.”
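
The analogy test is simple vector arithmetic over word embeddings: take the vector for “computer programmer”, subtract “man”, add “woman”, and look for the nearest word. A minimal sketch of that probe using gensim, assuming the pretrained Google News embeddings (the GoogleNews-vectors-negative300.bin file) are available locally:

```python
# a minimal sketch of the analogy probe; assumes the pretrained
# GoogleNews-vectors-negative300.bin file has been downloaded locally
from gensim.models import KeyedVectors

vectors = KeyedVectors.load_word2vec_format(
    "GoogleNews-vectors-negative300.bin", binary=True
)

# "man is to computer_programmer as woman is to X": find the words
# nearest to vector(computer_programmer) - vector(man) + vector(woman)
results = vectors.most_similar(
    positive=["woman", "computer_programmer"],
    negative=["man"],
    topn=3,
)
print(results)  # the Boston University team reported "homemaker" ranking first
```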

Unchecked bias such as this both reflects the mass of data stored by human society to date and highlights the care that programmers need to take when designing software for everyone. (Countless other examples of bias in AI systems – among which racial prejudice appears to be the most commonplace – are included in Middleton’s independent report.)

Read more: AI regulation & ethics: How to build more human-focused AI

Developing a gender-neutral virtual assistant

Sharma’s gender-neutral AI assistant Pegg symbolises her attempt to ensure that technology helps to tackle deeply embedded social and cultural stereotypes. Unlike the domesticated Amazon Alexa or the down-to-business IBM Watson, Pegg is designed to be a sidekick without obvious stereotypes, she said.

“Pegg is proud of being a bot and does not pretend to be human. Initially, there was a lack of awareness within the company and the outside world of stereotypes in AI, but I found it very encouraging that I got a very welcoming response to my efforts.”

Read more: IBM launches new Watson Assistant AI for connected enterprises

Accountability and transparency

According to Sharma, the two key components in developing AIs that reflect social diversity, rather than existing prejudices, are accountability and transparency. Only by understanding the full end-to-end development processes that any artificial system goes through can we check for inherent bias and keep its designers accountable.

“AI needs to reflect the diversity of its users,” she told the Financial Times earlier this month. This means using data sets that are as diverse as possible and making software that’s applicable to everyone.

For example, Google’s image-tagging algorithm was widely condemned in 2015 after it accidentally labelled black people as ‘gorillas’ – an issue that was only rectified by removing the offending word and associated terms from the system completely. The mistake was rooted in the data sets used to train the system.
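
Google’s fix, as described, was suppression rather than retraining: the offending labels were simply removed from the system’s output. A rough sketch of what that kind of post-processing blocklist looks like (the classifier interface is illustrative, and the label list is based on the terms Google reportedly blocked):

```python
# a rough sketch of output-side label suppression; the label list and
# prediction format are illustrative, not Google's actual implementation
BLOCKED_LABELS = {"gorilla", "chimpanzee", "monkey"}

def safe_labels(predictions):
    """Drop blocked labels from a classifier's (label, score) output."""
    return [(label, score) for label, score in predictions
            if label.lower() not in BLOCKED_LABELS]

print(safe_labels([("person", 0.92), ("gorilla", 0.41)]))
# -> [('person', 0.92)]: the biased prediction is hidden, not corrected
```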

Related problems have been identified across a whole range of AI systems, from imaging technologies that have long been optimised to identify light skin tones, to the MIT facial recognition system that was unable to identify a black woman, because the training data was compiled by, and among, a closed group of young white males.

The latter example was shared by MIT Media Lab chief Joichi Ito at the 2017 World Economic Forum in Davos, where he called his own students “oddballs”.

Ito suggested that many coders prefer the binary world of computers to the messier and more complex world of human beings. Most coders are young, white males, he added, and this lack of diversity in the technology community is often reflected in the systems that developers design, test, and release.

Some AI systems have also been shown to be better at identifying men than women – again, because of biases in the training data.
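
Gaps like that are invisible in a single headline accuracy number; they only show up when results are disaggregated by group. A minimal sketch of such a per-group evaluation, with made-up predictions standing in for a real model’s output:

```python
# a minimal sketch of disaggregated evaluation: accuracy per subgroup
# rather than one overall average; all data below is made up
from collections import defaultdict

def accuracy_by_group(y_true, y_pred, groups):
    """Return accuracy computed separately for each subgroup."""
    hits, totals = defaultdict(int), defaultdict(int)
    for truth, pred, group in zip(y_true, y_pred, groups):
        totals[group] += 1
        hits[group] += int(truth == pred)
    return {g: hits[g] / totals[g] for g in totals}

print(accuracy_by_group(
    y_true=[1, 1, 0, 1, 0, 1],
    y_pred=[1, 1, 0, 0, 1, 1],
    groups=["male", "male", "male", "female", "female", "female"],
))  # -> {'male': 1.0, 'female': 0.33...}: a gap an overall average would hide
```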

“AI is a fascinating tool to create equality in the world,” said Sharma. “When I’ve worked with people from diverse backgrounds, that’s where we’ve had the most impact.

“AI needs to be more open, less elite, with people from all kinds of backgrounds: creatives, technologists, and people who understand social policy… getting together to solve real-world problems.”

Plus: The five pillars of AI

In related news, analyst Ray Wang of Constellation Research today published an opinion on AI ethics, in which he suggested that there should be five pillars of development.

Wang said that AI should be:

  • Transparent, so that algorithms, attributes, and correlations are open to inspection by all participants.
  • Explainable, so that humans can understand how AI systems come to their contextual decisions.
  • Reversible, so that organisations are able to reverse what a system has learned and adjust as needed.
  • Trainable, so that systems have the ability to learn from humans and other systems.
  • Human-led, so that all decisions begin and end with “human decision points”.

But he added, “Prospects of universal AI ethics seem slim. However, the five design pillars will serve organisations well beyond social fads and fears.”

Internet of Business says

We salute Sharma’s work and her commitment to both addressing these problems herself and raising awareness of the issues. It’s notable, too, that this was her personal choice, proving that one person can make a big difference if they set out to do so.

The underlying problem is easy to express. It is not necessarily that developers are themselves knowingly biased or prejudiced; it would be a mistake to label the technology community as inherently racist, for example. It is more that most AI systems rely on human beings to train them with data.

Any existing bias in that data – for example, in legal systems that have exhibited a bias against particular ethnic or social groups over decades of case law – will then be picked up by the system. Equally, any lack of diversity in the technology community itself – which is known to be overwhelmingly white and male – also risks finding its way into the systems that community designs.

Last year, UK-RAS, the UK’s umbrella organisation for robotics and AI research, quoted figures suggesting that 83 percent of people working in science, technology, engineering, and maths (STEM) careers are male. Among coders, the split is closer to 90 percent male to 10 percent female, with an even stronger bias towards white employees. The systems they produce must not be allowed to reflect these biases.

Read more: Women in tech: the £150bn advantage of increasing diversity

Read more: AI regulation & ethics: How to build more human-focused AI

Read more: Women in AI & IoT: Why it’s vital to Re•Work the gender balance

Additional reporting: Chris Middleton.

Internet of Business

How to Hide and Remove System Preference Panes in macOS

In macOS, the System Preferences app located in the Applications folder is where you can adjust various settings to customize your Mac. Most system preference panes are native to macOS and cannot be removed – although they can be hidden. In this article, we’ll show you how it’s done.

Occasionally, third-party apps installed on your Mac will insert their own preference panes in the bottom row of the System Preferences panel. Sometimes these panes will pointlessly stick around even after you’ve uninstalled the associated app. Thankfully, though, they can be removed separately.
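
Third-party panes live as .prefPane bundles in ~/Library/PreferencePanes (or in /Library/PreferencePanes when installed for all users), and you can also control-click a third-party pane inside System Preferences and choose the Remove option. As a rough illustration, here’s a short Python sketch that lists any leftover bundles and deletes one by name; “ExamplePane” is a hypothetical placeholder:

```python
# a rough sketch: list leftover third-party preference pane bundles
# and delete one by name ("ExamplePane" is a hypothetical placeholder)
import shutil
from pathlib import Path

panes_dir = Path.home() / "Library" / "PreferencePanes"  # per-user panes
for pane in sorted(panes_dir.glob("*.prefPane")):
    print(pane.name)  # any bundles left behind by uninstalled apps

target = panes_dir / "ExamplePane.prefPane"
if target.exists():
    shutil.rmtree(target)  # the pane disappears from System Preferences
```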

How to remove Facebook app permissions and protect your privacy

Facebook lets third-party apps abuse your private, personal data and the private, personal data of your friends. To stop it, you have to remove those apps from Facebook.

Facebook login makes it feel like less of a hassle to sign into apps, games, and services. But when you use Facebook to log in, Facebook gives those apps access to your data — a lot of your data. Worse, Facebook gives those apps access to the data of your friends, even if those friends haven’t downloaded the app or consented in any way.

To prevent this, you have to stop those apps from accessing your Facebook account. If it’s too late for that, you have to remove them from Facebook so they can’t keep accessing your data.

Here’s how.

How to revoke app permissions on Facebook for iPhone

  1. Launch Facebook from your Home screen.
  2. Tap on the Menu icon at the bottom right.
  3. Tap on Settings near the bottom.
  4. Tap on Account Settings.
  5. Tap on Apps near the bottom.

  6. To prevent any app or site from accessing your data:

    1. Tap on Platform.
    2. Tap on Edit.
    3. Tap on Turn off Platform.
  7. To remove single apps:

    1. Tap on Logged in with Facebook.
    2. Tap on the app you want to remove.
    3. Tap Remove app at the bottom.
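
The Remove app button also has a programmatic counterpart: with a valid user access token for a given app, sending a DELETE request to the Graph API’s /me/permissions endpoint de-authorizes that app entirely. A hedged sketch in Python, with a placeholder token and the Graph API version current at the time of writing:

```python
# a minimal sketch: revoke all permissions the current app holds via
# the Graph API; the token below is a placeholder, not a real credential
import requests

ACCESS_TOKEN = "EAAB..."  # placeholder user access token for the app

resp = requests.delete(
    "https://graph.facebook.com/v2.12/me/permissions",
    params={"access_token": ACCESS_TOKEN},
)
print(resp.json())  # {"success": true} once the app has been de-authorized
```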

Any questions about Facebook and privacy?

If you have any questions, comments, or concerns about Facebook and your privacy, drop them below!

iMore – Learn more. Be more.

How to create or remove a passcode on your Apple Watch

Don’t have a passcode on your Apple Watch? Here’s why you should make one!

Like most modern devices, the Apple Watch can be secured with a passcode. Adding a passcode will lock the watch whenever you remove it from your wrist; to reactivate it, you need only type in the code. Unlike on the iPhone, iPad, or Mac, this passcode is numeric only and limited to fewer characters than the passcodes on your larger devices.

Why you should add a passcode to your Apple Watch

While the Apple Watch asks you to create a passcode when you first set up your device, you can bypass this requirement. But without a passcode, you can’t use many of the Apple Watch’s best features, including:

  • Apple Pay (contactless payments from your wrist)
  • Apple Pay on Mac (authorizing Apple Pay transactions from your Mac)
  • Auto Unlock for Mac
  • Unlock with iPhone, which automatically unlocks your Apple Watch for you whenever you unlock your iPhone

In addition, not having a passcode means that anyone could conceivably steal your watch and have access to your recent Health data, third-party app data, and more.

TL;DR: The momentary inconvenience of a passcode is more than worth the protection it provides.

How to add a passcode to your Apple Watch

If you bypassed adding a passcode when first setting up your Apple Watch, you can create one at any time directly on your wearable.

  1. Open the Settings app on your Apple Watch.
  2. Scroll down and tap Passcode.
  3. Tap Turn Passcode On.
  4. Enter a passcode to lock and unlock your Apple Watch.

If you’d like your iPhone to unlock your Apple Watch, you can enable the Unlock with iPhone switch; I also recommend enabling the Wrist Detection switch for extra security.

Questions?

Let us know in the comments.

iMore – Learn more. Be more.

How to add, remove and rearrange apps in your Apple Watch’s Dock

The Dock on your Apple Watch lets you open your favorite apps or quickly switch from one app to another.

“How to add, remove and rearrange apps in your Apple Watch’s Dock” is an article by iDownloadBlog.com.

iDownloadBlog.com

Snapchat and Instagram remove Giphy feature due to racial slur GIF

Snapchat and Instagram have temporarily removed their Giphy GIF sticker features after users saw an extremely racist GIF as an option to add to their images. Snapchat confirms to TechCrunch: “As soon as we were made aware, we removed the GIF and have disabled Giphy until we can be sure that this won’t happen again . . . while we wait for Giphy’s team to take a look at it.”

A source tells TechCrunch the same racist GIF was spotted in Instagram as well, indicating that Giphy is at fault. A tweet by Lyauna Augmon shows the GIF being used within Instagram. Giphy appears to have removed the GIF, as it’s no longer available in Instagram. An Instagram spokesperson tells TechCrunch: “This type of content has no place on Instagram. We have stopped our integration with Giphy as they investigate the issue.” The company confirms the change has been made but says it might take some time to propagate to all users.

[Update: This article has been updated to show that the GIF also appeared on Instagram, not just Snapchat.]

The Snapchat spokesperson says that all GIFs in Snapchat are meant to be “rated PG”, meaning they’re mostly suitable for the 13-and-up teens who are technically allowed on Snapchat.

The GIF includes disturbing text featuring a racial slur, which TechCrunch blurred out in a screenshot of the GIF on Snapchat received from a reader. It reads “N—– Crime Death Counter – Keep Cranking Bonzo, the Numbers Just Keep on Climbing!”

Snapchat’s official statement is: “We have removed GIPHY from our application until we can be assured that this will never happen again.” A Snapchat spokesperson tells me the company is very sorry. The Giphy community guidelines prohibit this kind of objectionable content in the first place, but since Giphy works like a search engine that indexes the top GIFs on the web, things can slip through.

We’ve reached out to Giphy for comment but haven’t heard back.

[Update 3/10: Giphy has now provided a statement to TechCrunch, admitting it was to blame for a bug allowing the offensive GIF through. A spokesperson tells us:

“A user discovered an offensive GIF sticker in our library, and we immediately removed it per our content guidelines.

After investigation of the incident, this sticker was available due to a bug in our content moderation filters specifically affecting GIF stickers.  We have fixed the bug and have re-moderated all of the GIF stickers in our library.

The GIPHY staff is also further reviewing every GIF sticker by hand and should be finished shortly.

We take full responsibility for these recent events and sincerely apologize to anyone who was offended.”]
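
Giphy hasn’t described the bug beyond that statement, but the failure class it points to is a familiar one: two content pipelines where only one passes through the shared moderation filter. Purely as an illustration of the pattern (every name and term list below is hypothetical), a sketch of the bug and its fix:

```python
# an illustrative sketch only; Giphy's real pipeline is not public.
# Failure pattern: GIF search is moderated, the sticker path is not.
BLOCKED_TERMS = {"offensive_term"}  # placeholder for a real blocklist

def is_clean(item):
    return not BLOCKED_TERMS.intersection(t.lower() for t in item["tags"])

def search_gifs(index, query):
    return [item for item in index.get(query, []) if is_clean(item)]

def search_stickers(index, query):
    # the buggy version returned index.get(query, []) unfiltered;
    # the fix routes stickers through the same moderation check as GIFs
    return [item for item in index.get(query, []) if is_clean(item)]
```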

Snapchat only launched the Giphy integration on February 20th so people could jazz up their photos and videos with moving images curated as safe by the Giphy team. TechCrunch broke the news in late January that Instagram was building a similar Giphy integration, which launched a week later.

This isn’t Snapchat’s first run-in with racist content. Back in 2016, it was heavily criticized for an augmented reality lens that gave people slanted eyes, evoking an Asian “yellowface” stereotype. Snapchat risks an unsavory reputation if it can’t keep its content under control. The slip-up could also deter Snapchat from working more with outside developers, something it has only recently begun to allow by bringing outside content into its app via Lens Studio and the Giphy integration.

The incident is embarrassing for Instagram’s parent company Facebook. It also casts doubt on Facebook’s Messenger Kids app, whose Giphy integration is only supposed to show G-rated imagery.

Snapchat and Instagram will have to decide whether they want help from outsiders even when they can’t guarantee the quality or safety of that content, or whether to go it alone as they compete against each other.

Mobile – TechCrunch

Instagram & Snapchat remove Giphy integration due to sticker w/ racial slur

Earlier this year, both Snapchat and Instagram added integration with popular GIF service Giphy, allowing users to add the stickers to their posts and stories. As first reported by TechCrunch, however, both social networks have now removed that integration due to a GIF with a racial slur…

9to5Mac

How To Remove Electra iOS 11 Jailbreak From Your iPhone Or iPad

Here’s how to remove or uninstall the Electra iOS 11 jailbreak and Cydia from your iPhone or iPad, effectively un-jailbreaking your device.

Redmond Pie
