RBC Capital Markets analyst Amit Daryanani believes that Apple will price the 2018 iPhone X and iPhone X Plus at $899 and $999, respectively.
iPhone Hacks | #1 iPhone, iPad, iOS Blog
While Siri has improved by leaps and bounds over the past few years, Apple’s intelligent assistant isn’t the game-changer many assumed it would become upon its introduction on the iPhone 4s back in 2011. If anything, Siri has seemingly been lapped by competing intelligent assistants from the likes of Google and Amazon. Even when it comes to something as basic as voice recognition, Siri tends to come in a step behind its rivals.
Apple of course has a large and dedicated team of talented engineers and researchers working on enhancing the Siri experience, a fact which raises the question: why is Siri not the premier intelligent assistant on the market?
Tackling this question, Siri co-founder (recall that Apple acquired Siri in 2010) Norman Winarsky recently opined that Apple’s goals for Siri are simply too broad. In other words, Winarsky believes that Apple — to its own detriment — wants Siri to be good at many things instead of focusing on just a few areas.
Pre-Apple, Winarsky said, Siri was intended to launch specifically as a travel and entertainment concierge. Were you to arrive at an airport to discover a cancelled flight, for example, Siri would already be searching for an alternate route home by the time you pulled your phone from your pocket—and if none was available, would have a hotel room ready to book. It would have a smaller remit, but it would learn it flawlessly, and then gradually extend to related areas. Apple launched Siri as an assistant that can help you in all areas of your life, a bigger challenge that will inevitably take longer to perfect, Winarsky said…
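The concierge behavior Winarsky describes amounts to a simple fallback flow: try to rebook, and failing that, have a hotel ready. A minimal sketch of that logic in Python is below; the function name and data shapes are illustrative assumptions, not anything from Siri's actual implementation.

```python
# Hypothetical sketch of the travel-concierge fallback flow Winarsky describes.
# All names and data structures here are invented for illustration.

def handle_cancelled_flight(flight, alternate_routes, hotels):
    """Suggest a rebooking; fall back to a hotel if no route home exists."""
    # Only routes to the traveller's original destination are candidates.
    candidates = [r for r in alternate_routes
                  if r["destination"] == flight["destination"]]
    if candidates:
        # Prefer the earliest available departure.
        best = min(candidates, key=lambda r: r["departs"])
        return {"action": "rebook", "route": best}
    if hotels:
        # No way home tonight: have a room ready to book.
        return {"action": "book_hotel", "hotel": hotels[0]}
    return {"action": "notify", "message": "No alternatives found"}
```

The point of Winarsky's argument is that a narrow flow like this can be made near-flawless before being extended, whereas an open-ended assistant must handle every domain at once.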
“These are hard problems and when you’re a company dealing with up to a billion people, the problems get harder yet,” Winarsky said. “They’re probably looking for a level of perfection they can’t get.”
Apple of course is well-aware of many of the challenges currently facing Siri. Indeed, the company has made a number of acquisitions in recent memory as part of a broader effort to beef up Siri’s capabilities. As to any Siri improvements on the horizon, we’ll have to wait and see if Apple has any surprises in store for us come WWDC this coming June.
It’s also worth noting that some believe Siri’s shortcomings can be attributed to Apple’s obsession with keeping user data private and not sending it up to the cloud as other companies do. For what it’s worth, Apple has long maintained that exceptional AI capabilities and protecting user data are not mutually exclusive objectives.
“I think it is a false narrative,” Apple executive Greg Joswiak told Fast Company last year. “It’s true that we like to keep the data as optimized as possible, that’s certainly something that I think a lot of users have come to expect, and they know that we’re treating their privacy maybe different than some others are.”
On a somewhat related note, The Wall Street Journal a few months ago ran a story which mapped out how Siri — even with a multi-year lead — managed to cede ground to rivals like Amazon. It's well worth a read.
Apple will continue to be a major customer of Dialog Semiconductor, the chip manufacturer’s chief executive has claimed in an interview, insisting Dialog will continue supplying components for use in a number of Apple products until 2020, despite rumors that the iPhone producer may change how it sources some of its power management hardware.
AppleInsider – Frontpage News
While many technology leaders are bullish about the positive aspects of AI, Alibaba CEO Jack Ma has warned it could trigger World War 3.
Ma points towards previous technological revolutions triggering some of the darkest periods in history.
“The first technology revolution caused the First World War, and the second technology revolution caused the Second World War,” said Ma, according to a story in Business Today. “Now we have the third revolution.”
Alibaba is a $24 billion Chinese giant which focuses on e-commerce, retail, internet, and other verticals which are constantly disrupted by new technology. Overall, the company is keen on AI and its potential, but Ma remains sceptical.
“Technology should enable people not disable them,” he said. “We should spend money on technology that empowers us and makes life better. The AI and robots are going to kill a lot of jobs as machines will replace humans in the future.”
Ma is not alone in his concerns. Tesla and SpaceX founder, Elon Musk, has famously voiced his worries on several occasions about AI becoming weaponised.
Back in September, Musk tweeted ‘It begins…’ in reference to Russian president Vladimir Putin claiming the nation which leads in AI ‘will become the ruler of the world.’
Just a month prior, AI News reported on a survey of security experts at Black Hat USA 2017. 62 percent of the infosec experts believe AI will be weaponised for use in cyberattacks within the next 12 months. With regards to who poses the biggest cybersecurity threat to the United States, Russia came out number one.
Fast-forward to November, when the first ‘Robot Ethics Charter’ was created by Andrey Neznamov — head of Russian robot research centre, Robopravo — in response to fears that machines possessing AI could lead to the “destruction of humanity” if they’re not sufficiently regulated. That same month, prominent researchers began sending letters to their respective leaders calling for a global stand against AI militarisation.
Robert Work, a former deputy US secretary of defence, warned the US military must now decide if it wants to “lead the coming revolution, or fall victim to it” amid the emerging challenges from China and Russia.
Russian state media have reported on the military developing automated drones, vehicles, robots, and cruise missiles. China, meanwhile, has published a roadmap with its national plan to prioritise AI and use it for defence purposes.
Needless to say, the fears of Ma and Musk are not without substance. Without due oversight, there is potential for AI to be devastating. However, if you’re reading this, you’re likely aware of the immeasurable benefits if AI is developed and used ethically.
Google CEO, Sundar Pichai, just this week even went as far as to say AI was more important than fire or electricity in a one-on-one interview with WEF representatives. Certainly, without the latter, AI wouldn’t even be possible — but the sentiment about how revolutionary it will be remains.
“Anytime you work with technology, you need to learn to harness the technology while minimising the downsides,” Pichai said. “The risks [of AI] are substantial, but the way you solve it is by looking ahead, thinking about it, thinking about AI safety from day one, and to be transparent and open about how we pursue it.”
European leaders have recently been having their say on AI. German Chancellor Angela Merkel highlighted the risks and benefits which big data collection by foreign companies poses to her fellow countrymen during comments at the World Economic Forum.
“…Large American and Chinese companies are collecting more and more data while Europe is doing little,” she said. Part of the reason is anxiety over the upcoming GDPR regulations, which, as I warned in an editorial, risk leaving European AI startups behind their international counterparts.
Great Britain is a leader in AI in Europe — with established players such as Google-owned DeepMind — and a new startup in the field has launched, on average, every week for the last three years.
The UK must comply with GDPR regulations while it’s an EU member, but it may relax these rules once it’s left. British PM Theresa May is expected to use her keynote speech at a summit of world leaders in Davos today to call for ‘ethical oversight of AI’ – a message we can all hope is received by the international community.
(Image Credit: "E-commerce week: Youth Employment in the Digital Economy” by UNCTAD, used under CC BY-SA 2.0)
What are your thoughts on the AI concerns? Let us know in the comments.
Boeing CEO and president Dennis Muilenburg recently told CNBC‘s Squawk on the Street host Jim Cramer that his company will be beating Elon Musk to the Red Planet. “…I firmly believe the first person that sets foot on Mars will get there on a Boeing rocket,” he told Cramer after being asked whether he or Musk would “get a man on Mars first.”
Muilenburg’s answer reiterates a claim he made in October 2016 at The Atlantic‘s “What’s Next” conference — underwritten by Boeing — nearly word-for-word. There, he discussed impending innovations to low-Earth orbit space travel, space tourism and, almost as an afterthought, the first person on Mars. “I’m convinced that the first person to step foot on Mars will arrive there riding on a Boeing rocket,” he said at the recorded event.
The “Path to Mars” section on the Boeing website indicates that its Space Launch System (SLS) and crew transportation vehicle, Orion, are in production. Muilenburg described the SLS to Cramer: “This is a rocket that’s about 36 stories tall, we’re in the final assembly right now, down near New Orleans.” Muilenburg said the first test flight, which will send Orion around the Moon, is planned for 2019.
While the 2019 launch date may sound impressive, it represents a one-year delay from the initial estimated timeframe. In April 2017, a report from the US Government Accountability Office said the original November 2018 SLS launch date was “likely unachievable as technical challenges continue to cause schedule delays.”
Ultimately, the plan that Boeing has laid out shows that the company plans on doing missions in Mars’s orbit in the early 2030s, and missions to the planet’s surface in the mid-to-late 2030s.
Whether that timeline can beat Elon Musk’s, however, remains a question.
Musk and his company SpaceX are almost synonymous with humanity’s quest to travel to and colonize the Red Planet. At the International Astronautical Congress in Adelaide, Australia in September 2017, Musk gave a progress report on the company’s goal of getting us there. He revealed a new reusable spacecraft capable of refueling in space, the Big Falcon Rocket (BFR), which he said could transport the first colonists to Mars in 2024.
Fortune Tech highlighted their coverage of Muilenburg’s most recent comments with a tweet saying: “Boeing CEO: We’re Going to Beat Elon Musk to Mars.” Never one to back down from a challenge, Musk responded to the tweet with two words: “Do it.”
The SpaceX CEO’s laconic response is in line with his previous thoughts on the subject at the 2016 International Astronautical Congress, which took place in Guadalajara, Mexico. There, Musk said, “I think it’s actually much better for the world if there are multiple companies or organizations building these interplanetary spacecraft. You know, the more the better.”
Much like when the US and the USSR jockeyed for space supremacy in the 1950s and 60s — each nation pushing the other towards achieving new exploration benchmarks on even shorter timelines — it seems this new space rivalry may ensure we get to Mars as quickly, and cheaply, as possible.
Stephen Hawking is one of the most respected minds in science today. He often speaks on a wide range of topics both within and outside of his particular expertise in theoretical physics. Some of Hawking’s most discussed topics include the search for alien life, climate change, artificial intelligence (AI), and how all of these things, and more, may spell the end of humanity once and for all.
Speaking at an event at Cambridge University last year, Hawking said, “Our earth is becoming too small for us, global population is increasing at an alarming rate and we are in danger of self-destructing.”
He recognizes the pessimism of his assertions. Hawking cited the controversial “Brexit” plan for the UK to leave the EU as a reason for this enduring pessimism, saying that if the measure passes, “…I would not be optimistic about the long-term outlook for our species.”
Durwood Zaelke is the founder and President of the Institute for Governance & Sustainable Development (IGSD), an organization with a mission “to promote just and sustainable societies and to protect the environment by advancing the understanding, development, and implementation of effective, and accountable systems of governance for sustainable development.” Zaelke spoke to Futurism about Hawking’s comments.
“Mr. Hawking is spot on,” he began. “We’re chasing a fast-moving—indeed an accelerating problem of climate change, with slow-moving solutions, and we’re getting further behind every day.” As populations continue to boom, the detrimental environmental impact of humanity will only continue to worsen.
Like Hawking, Zaelke believes that we are close to the tipping point where climate change becomes irreversible. Speaking in dire terms, Zaelke said that “…we’ll soon face climate-driven chaos that will threaten our very civilization and our democratic form of government, while the fear of chaos feeds authoritarian regimes.”
In order to ensure the survival of our species, Hawking suggests humanity move beyond the confines of this planet. In June, Hawking told the BBC that “Spreading out may be the only thing that saves us from ourselves. I am convinced that humans need to leave Earth.” Colonizing other areas of the Solar System will certainly help to relieve some of this population pressure and therefore mitigate carbon emissions. Despite whatever potential this idea has, Hawking cites another threat to humankind that there just might be no running away from.
Hawking often speaks about the development of artificial intelligence (AI) as the true perpetrator of the eventual demise of human beings. In an interview with WIRED Hawking said, “The genie is out of the bottle. I fear that AI may replace humans altogether.”
Hawking fears that we will develop AI that is too competent: “A superintelligent AI will be extremely good at accomplishing its goals, and if those goals aren’t aligned with ours, we’re in trouble.” Other big names within science and tech share Hawking’s trepidations.
Founder of OpenAI and CEO of SpaceX and Tesla, Elon Musk, also has concerns about the destructive potential of AI. Musk’s rhetoric is more measured than Hawking’s (though he did say that AI is more of a risk than North Korea), focusing on the need to regulate the development of AI systems. “AI just something that I think anything that represents a risk to the public deserves at least insight from the government because one of the mandates of the government is the public well-being,” Musk said.
Hawking believes that “some form of world government” should have control of the technology to make sure the machines don’t rise up and rebel like their terminating fictional counterparts.
However, these fears could be disproportionate to reality. Speaking to Futurism about all of this fear surrounding the development of AI, Pascal Kaufmann, the founder of Starmind and president of the synthetic intelligence development initiative the Mindfire Foundation, denies the likelihood of AI developing into a threat. “It is fascinating that when it comes to AI, our first thoughts often go towards being enslaved by our creations. That perspective makes for entertaining movies, but it does not mean that reality is doomed to walk the same path, or that it is the likely scenario,” he explained.
Kaufmann, however, does not deny the potential for destructive AI. “There are dangers which come with the creation of such powerful and omniscient technology, just as there are dangers with anything that is powerful. This does not mean we should assume the worst and make potentially detrimental decisions now based on that fear.”
Perhaps there is some way to trade the irrational fears for rational ones. Hawking certainly is right about the threat of climate change. Maybe if humans were more concerned with the scientific fact of climate change than the science fiction of killer robot overlords, we could start making real progress in getting our planet back on track.
The post Stephen Hawking Believes Humankind Is in Danger of Self-Destruction Due to AI appeared first on Futurism.
While Samsung continues to reap the rewards of being the world's largest Android partner, it also has its eyes set on the future of the connected home. "Samsung is very focused on the internet of things," said David Eun, the president of Samsung Next…
Engadget RSS Feed
According to Goldman Sachs, Bitcoin (BTC) could climb to nearly $4,000 in the near future. Near, in this case, is relative. But after BTC breaks through the “messy” period in which it currently resides, Goldman analyst Sheba Jafari believes the coin is headed to at least $3,212, and quite possibly $3,900-plus. As of this writing BTC was worth $2,600 per coin. Jafari notes that BTC has entered its fourth “wave” and that these periods “tend to be messy/complex.” We should expect more volatility, some sideways consolidation, and a new target of as much as $3,900 once it eventually passes into…
This story continues at The Next Web