A continuing theme throughout Apple's history, from the early Steve Jobs years to today, has been education. AppleInsider examines the company's efforts — some successful, some not — to appeal directly to the education market.
As of this month, the US satellite Vanguard I has spent 60 years in orbit, and it remains the oldest man-made object in space. Vanguard I was the fourth satellite launched into orbit — following the USSR's Sputnik I and II and the US's Explorer I. But…
Fish play a major role in human survival. Globally, more than one billion poor people depend on fish as their main source of animal protein, and 250 million depend on fishing and aquaculture for their livelihoods.
According to nonprofit research organization WorldFish, demand for fish is growing so much that the industry is struggling to meet it, and now, a new study published in Marine Policy suggests that we might have gotten some key numbers wrong in our past studies of global catch trends.
The authors of the study, Dirk Zeller from the University of Western Australia and Daniel Pauly from the University of British Columbia, believe that the poor quality of past recording and reporting methods may have caused researchers to overlook a significant portion of fish catches.
They argue that the seeming stability of fish catches is due to improved data collection in recent years. We now have a more accurate picture of the world’s fishing industry, but because we didn’t before, we didn’t really know how catches were trending.
For their study, the researchers made estimated adjustments to past records. They collaborated with 400 assistants across the globe as part of their Sea Around Us project, gathering data from every fishing country.
Based on their adjusted data, the world is catching far fewer fish than it was 20 years ago. “Our reconstructed data have shown that globally, the catches have been declining by about 1.2 million metric tons a year since the mid-1990s,” Zeller told Oceans Deeply.
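The trend Zeller describes is essentially a slope fitted to reconstructed annual catch totals. Here is a minimal sketch of that kind of estimate; the numbers below are invented for illustration and are not the Sea Around Us data:

```python
import numpy as np

# Hypothetical reconstructed global catch totals (million metric tons).
# Illustrative only: a downward trend of ~1.2 Mt/year plus noise.
years = np.arange(1996, 2016)
rng = np.random.default_rng(0)
catches = 130.0 - 1.2 * (years - 1996) + rng.normal(0, 2.0, years.size)

# Ordinary least-squares fit: the slope is the estimated change per year.
slope, intercept = np.polyfit(years, catches, 1)
print(f"estimated trend: {slope:.2f} million metric tons per year")
```

With noisy yearly totals, a regression like this recovers the underlying decline far more reliably than comparing any two individual years.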
Fewer Fish or Less Fishing?
Some have criticized Zeller and Pauly for making too many assumptions about past data.
“We completely disagree with their conclusions of declining global catches,” Manuel Barange, the United Nations Food and Agriculture Organization’s (FAO) fisheries and aquaculture director, told Oceans Deeply. “Total landings have been very consistent for 20 years.”
If the team’s adjustments are accurate, though, they raise some important questions.
A decline in fish catches could simply mean that we are fishing less — perhaps diets are shifting in key areas of the world.
However, if the number of fishing boats at sea remained the same or even increased during the period of the study, as the authors suspect, the research could be a sign that the planet simply has fewer fish.
Past studies have noted the negative impact of pollution and climate change on fish populations, and if this study is evidence that global stocks have been declining for more than two decades, it adds new urgency to our need to address those problems.
Some have argued that GMOs in the U.S. and Canada haven’t increased crop yields and could threaten human health; this sweeping analysis suggests just the opposite.

For this study, published in the journal Scientific Reports, a group of Italian researchers took over 6,000 peer-reviewed studies from the past 21 years and performed what is known as a “meta-analysis,” a cumulative analysis that draws from hundreds or thousands of credible studies. This type of study allows researchers to draw conclusions that are more expansive and more robust than what could be taken from a single study.

The analysis, which was not limited to studies conducted in the U.S. and Canada, showed that GMO corn varieties have increased crop yields worldwide by 5.6 to 24.5 percent when compared to non-GMO varieties. The researchers also found that GM corn crops had significantly fewer mycotoxins — toxic chemical byproducts of fungal colonization of crops — with up to 36.5 percent less, depending on the species.
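A meta-analysis pools each study's effect estimate, typically weighting by inverse variance so that more precise studies count more. The toy fixed-effect sketch below uses invented effect sizes and variances, not values from the Scientific Reports paper:

```python
import numpy as np

# Hypothetical per-study yield differences (GM vs. non-GM corn, percent)
# and their sampling variances. All numbers are made up for illustration.
effects = np.array([5.6, 12.0, 24.5, 9.3])
variances = np.array([4.0, 2.5, 9.0, 1.5])

# Fixed-effect pooling: weight each study by the inverse of its variance.
weights = 1.0 / variances
pooled = np.sum(weights * effects) / np.sum(weights)
pooled_se = np.sqrt(1.0 / np.sum(weights))
print(f"pooled effect: {pooled:.1f}% (standard error {pooled_se:.2f})")
```

The pooled estimate lands closest to the most precise studies, and its standard error is smaller than any single study's, which is why meta-analytic conclusions can be more robust than those of individual trials.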
There has been, for a variety of largely unscientific reasons, serious concern surrounding the effects of GMOs on human health. This analysis confirms that not only do GMOs pose no risk to human health, but also that they actually could have a substantive positive impact on it.
Mycotoxins, chemicals produced by fungi, are both toxic and carcinogenic to humans and animals. A significant percentage of non-GM and organic corn contains small amounts of mycotoxins. These chemicals are often removed by cleaning in developing countries, but the risk still exists.
GM corn has substantially fewer mycotoxins because the plants are modified to experience less crop damage from insects. Insects weaken a plant’s immune system and make it more susceptible to infection by the fungi that produce mycotoxins.
In their analysis, the researchers stated that this study allows us “to draw unequivocal conclusions, helping to increase public confidence in food produced with genetically modified plants.”
While there will likely still be questions raised as GMOs are incorporated into agriculture, this analysis puts some severe concerns to rest. Additionally, this information might convince farmers and companies to consider the potential health and financial benefits of using genetically modified corn. Some are already calling this meta-analysis the “final chapter” in the GMO debate.
Written off not long ago as dead technology, film has recently been embraced by instant photographers and filmmakers like Quentin Tarantino and Christopher Nolan. A company called Reflex has capitalized on that trend by launching the manual focus, 35…
Despite all of the arguably worthwhile hype over artificial intelligence (AI) and artificial neural networks, current systems require huge quantities of data to learn, and experts have become increasingly concerned that future systems will, too. Now, the Google researcher responsible for much of the hype over neural networks has developed a new type of AI he believes will address this limitation: capsule networks.
Geoff Hinton outlined his capsule networks in a pair of open-access research papers published on arXiv and OpenReview.net. He claims the papers prove ideas he has been mulling over for nearly 40 years. “It’s made a lot of intuitive sense to me for a very long time, it just hasn’t worked well,” Hinton said in an interview with Wired. “We’ve finally got something that works well.”
Each “capsule” in Hinton’s network comprises a small group of artificial neurons that cooperate to identify things. These capsules are organized into layers, and each layer of capsules is designed to identify specific features in an image. When multiple capsules within a layer agree on what they’ve identified, they activate the next layer, and then the next. The layers cascade onward until the network is sure about what it is identifying.
Presently, a computer must look at thousands of photos of an object from many different perspectives in order to recognize that object from different angles. Hinton told Wired he believes the redundancies in the layers will allow capsule networks to identify objects from multiple angles and in different scenarios.
So far, he seems to be right. In a test identifying handwritten digits, capsule networks were able to match the accuracy of the best old-school neural networks, and in a test identifying toys from multiple angles, they halved the error rate.
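The "agreement" mechanism described above can be sketched as iterative routing: each lower capsule makes a vector prediction for each upper capsule, and predictions that agree with an upper capsule's emerging output get routed more strongly to it. The numpy toy below is a simplified illustration inspired by the routing-by-agreement idea, not a faithful reimplementation of Hinton's published networks; the shapes and the `squash` nonlinearity are common conventions, and the data is synthetic:

```python
import numpy as np

def squash(v, axis=-1):
    # Shrink a vector so its length lies in (0, 1) while keeping its direction.
    norm_sq = np.sum(v ** 2, axis=axis, keepdims=True)
    return (norm_sq / (1.0 + norm_sq)) * v / np.sqrt(norm_sq + 1e-9)

def route(predictions, iterations=3):
    # predictions[i, j]: lower capsule i's predicted pose vector for upper capsule j.
    n_lower, n_upper, dim = predictions.shape
    logits = np.zeros((n_lower, n_upper))               # routing logits
    for _ in range(iterations):
        coupling = np.exp(logits)
        coupling /= coupling.sum(axis=1, keepdims=True)  # softmax over upper capsules
        s = np.einsum("ij,ijd->jd", coupling, predictions)  # weighted vote
        v = squash(s)                                    # upper capsule outputs
        logits += np.einsum("ijd,jd->ij", predictions, v)   # reward agreement
    return v

rng = np.random.default_rng(0)
preds = rng.normal(scale=0.3, size=(8, 2, 4))  # 8 lower capsules, 2 upper, 4-D poses
preds[:, 0, :] = [1.0, 0.0, 0.0, 0.0]          # all lower capsules agree on capsule 0
out = route(preds)
print(np.linalg.norm(out, axis=1))  # capsule 0 should end up with the longer vector
```

Because every lower capsule makes the same prediction for capsule 0, routing concentrates there and its output vector stays long; the disagreeing random votes for capsule 1 partly cancel, so its output stays short. A capsule's vector length can then be read as the network's confidence that the corresponding feature is present.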
In 2012, Hinton and two graduate students at the University of Toronto proved that artificial neural networks could advance a computer’s ability to understand images by leaps and bounds. Their research lit imaginations afire in the tech world, and soon, all three were working for Google.
Still, Hinton isn’t convinced the technology is anywhere near as good as it could be. “I think the way we’re doing computer vision is just wrong,” he told Wired. “It works better than anything else at present, but that doesn’t mean it’s right.”
While Hinton acknowledges to Wired that his capsule networks work more slowly than existing image-recognition software and have yet to be tested on large collections of images, he is optimistic his new system will be an improvement on traditional neural networks once he’s able to address its shortcomings. Considering all we’ve been able to accomplish with the “wrong” kind of computer vision, just imagine what we’ll be able to do with the right.
Today, many of the world’s leading companies are in a one-of-a-kind race: to bring artificial intelligence (AI) to life. Already, machine learning systems are at the core of many businesses, so it’s no surprise that updates about this AI or that neural net often pop up in our news feeds. Such headlines typically read along the lines of, “AI beats human players in video game” or “AI mimics human speech” and even sometimes things like “AI detects cancer using machine learning.”
But just how close are we to having machines with the intelligence of a human—machines that we can talk with and work with like we do any other individual? Machines that are conscious?
While all of the aforementioned developments are real, Yann LeCun, Director of AI Research at Facebook and a professor of computer science at NYU, thinks that we may be overestimating the abilities of today’s AI and, thus, building up a bit of hype. “We’re very far from having machines that can learn the most basic things about the world in the way humans and animals can do,” LeCun told The Verge in an interview published last week. “Like, yes, in particular areas machines have superhuman performance, but in terms of general intelligence we’re not even close to a rat.”
This so-called artificial general intelligence (AGI) refers to a system capable of performing virtually any task a human being could. Today’s AIs, by contrast, specialize in particular tasks: image or speech recognition, for example, or identifying patterns by sifting through the tons of data on which they have been trained. These specialized AIs are also called “applied AI” or “narrow AI” to highlight their rather limited intelligence.
Speaking to Futurism via email, Manuel Cebrian, one of the MIT researchers who developed Shelley, an AI horror storyteller, agreed with LeCun’s sentiments. “AI is just a great tool,” he said, adding that, “it seems to me, based on my work with Shelley, that AI is very far from being able to create professional-level horror fiction.” And thus, still quite far from human levels of intelligence.
LeCun clarifies that we shouldn’t devalue the significant work that AI researchers have made in recent months and years, but that work in machine learning and neural networks is not the same as developing true artificial intelligence. “So for example, and I don’t want to minimize at all the engineering and research work done on AlphaGo by our friends at DeepMind, but when [people interpret the development of AlphaGo] as significant progress towards general intelligence, it’s wrong,” LeCun added. “It just isn’t.”
Pierre Barreau, CEO of Aiva Technologies, the company behind the music-composing AI Aiva, also thinks that the advancements that we have made towards synthetic intelligence are overstated. “AGI is a very hyped topic,” he noted via email. “I am, in general, quite optimistic about how fast tech develops, but I think a lot of people don’t realize the complexity of our own brain, let alone creating an artificial one.”
Making Artificial General Intelligence
People often use AI-related terms as if they were synonymous with true artificial intelligence. News coverage drops terms like machine learning, deep learning, and artificial neural networks whenever AI is discussed. While each of these has something to do with AI, none of them is AI per se.
Machine learning is a tool: a set of algorithms that improve by ingesting large amounts of data, from which an intelligent system is constructed. Deep learning is a kind of machine learning that relies on many-layered (“deep”) artificial neural networks and requires less task-specific engineering. An artificial neural network, in turn, is a computing system loosely modeled on the way the human brain works, and it is the structure on which such learning algorithms run.
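To make the distinction concrete, here is a minimal artificial neural network trained by a machine-learning algorithm (plain gradient descent) on the toy XOR task. It illustrates the tools the terms above refer to, and just as clearly illustrates how far such a system is from general intelligence; the architecture and hyperparameters are arbitrary choices for the sketch:

```python
import numpy as np

# Tiny two-layer network learning XOR: inputs and target outputs.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

rng = np.random.default_rng(42)
W1 = rng.normal(size=(2, 8))   # input -> 8 hidden units
b1 = np.zeros(8)
W2 = rng.normal(size=(8, 1))   # hidden -> 1 output
b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(10_000):
    # Forward pass.
    h = np.tanh(X @ W1 + b1)
    p = sigmoid(h @ W2 + b2)
    # Backward pass: gradients of the cross-entropy loss.
    dp = p - y
    dW2, db2 = h.T @ dp, dp.sum(axis=0)
    dh = (dp @ W2.T) * (1 - h ** 2)
    dW1, db1 = X.T @ dh, dh.sum(axis=0)
    # Gradient-descent update.
    for param, grad in ((W1, dW1), (b1, db1), (W2, dW2), (b2, db2)):
        param -= 0.05 * grad

print(p.ravel())  # predictions should approach 0, 1, 1, 0
```

The network "learns" nothing beyond the four input patterns it was fed, which is exactly the narrowness LeCun is pointing at.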
All of these, AI experts believe, are the foundation for a synthetic intelligence with truly human cognition. But this is just the nascent stage; we have made a lot of progress, but current research isn’t really close to creating true intelligence.
So the big question is, when can we expect to have this type of intelligent AI? What’s the specific timeline?
For Luke Tang, general manager of AI startup accelerator TechCode, the shift will start with a “breakthrough in unsupervised learning algorithms.” Once this is accomplished, “machine intelligence can quickly surpass human intelligence,” he said in a statement sent to Futurism.
Needless to say, the path to this will be quite challenging. “In order to achieve AGI, there will need to be major breakthroughs not just in software, but also in Neuroscience and Hardware,” Barreau explained. He clarified, “We are starting to hit the ceiling of Moore’s law, with transistors being as small as they can physically get. New hardware platforms like quantum computing have not yet shown that they can beat performances of our usual hardware in all tasks.”
Indeed, for an AI to be considered truly intelligent, most agree that it has to pass at least five tests, foremost of which is the Turing Test—where a machine and a human both converse with a second human being, who will determine which one is a machine. Barreau said that he’s confident that we will see in our lifetime an AI passing the Turing Test; i.e., that it would pass as a human being. However, he says this won’t necessarily be “AGI, but good enough to pass as AGI.”
A Case for Augmented Intelligence
It goes without saying that an AGI is the prerequisite for the so-called singularity. If you aren’t familiar with the concept of “singularity,” it’s essentially that moment when intelligent machines surpass humankind’s levels of intelligence, spurring runaway and exponential technological growth that will transform the foundations of life as we know it. The term was coined in 1993 by Vernor Vinge, who wrote: “We will soon create intelligences greater than our own. When this happens, human history will have reached a kind of singularity, an intellectual transition as impenetrable as the knotted space-time at the center of a black hole, and the world will pass far beyond our understanding.”
While this is something SoftBank CEO Masayoshi Son and Google’s Ray Kurzweil are excitedly looking forward to, other brilliant minds of today, such as Elon Musk, Stephen Hawking, and even Bill Gates, aren’t quite as keen for this moment. They assert that, in the same way that we don’t really understand what it means to have a super-intelligent AI, we’re also not prepared for whatever consequences the singularity would bring.
“We should focus our efforts on an exciting outcome of AI: augmented intelligence (i.e. human intelligence being augmented by AI),” Barreau said. Like Aiva and Shelley, other AIs have done considerably well when working side-by-side with human beings.
Still, with intelligent robots like Hanson Robotics’ Sophia and SoftBank’s Pepper, it does not seem very far-fetched to imagine truly intelligent machines living among us. Could Masayoshi Son’s super-intelligent AI, with an IQ of 10,000, be the cognitive machine intelligence we’re looking for? If so, we may have to wait at least three more decades. “It’s probably only 30 to 50 years away,” Tang said. “So it is likely — it will just take some time to get there. But it also means many of us will have a chance to see that day come!”
Word about Tesla’s all-electric semi truck has been trickling out since April, but the latest note from Morgan Stanley analyst Adam Jonas suggests the truck could bring big changes to the trucking industry.
“We believe TSLA’s reveal of its autonomous, electric Class 8 semi-truck this month could be the biggest catalyst in Trucking in decades and potentially set off separation between the technology leaders and the laggards among carriers, shippers, truck OEMs and suppliers,” said Jonas.
Very little is known about the truck itself, but it will reportedly have a range of 321–483 kilometers (200–300 miles). In his note, Jonas speculates the truck will cost around $100,000, be as much as 70 percent cheaper to operate than a diesel truck, and be for sale by 2020. He also suspects the company will announce a partnership with major trucking companies to use the first wave of electric semis.
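To put a "70 percent cheaper to operate" figure in perspective, here is a back-of-the-envelope sketch. The per-mile diesel cost and annual mileage below are made-up illustrative assumptions, not figures from the Morgan Stanley note:

```python
# Hypothetical operating cost of a diesel semi, in USD per mile.
diesel_cost_per_mile = 0.70

# "70% cheaper to operate" would imply paying 30% of the diesel cost.
electric_cost_per_mile = diesel_cost_per_mile * (1 - 0.70)

# Hypothetical long-haul annual mileage.
annual_miles = 100_000
savings = (diesel_cost_per_mile - electric_cost_per_mile) * annual_miles

print(f"electric: ${electric_cost_per_mile:.2f}/mile, "
      f"annual savings: ${savings:,.0f}")
```

Even with rough numbers like these, per-truck savings in the tens of thousands of dollars a year would explain why Jonas expects the reveal to separate "technology leaders" from "laggards" among carriers.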
As for when the actual truck is unveiled, Tesla CEO Elon Musk said it would be shown in September, but didn’t specify a day. Current guesses set the reveal for late September, specifically between the 25th and the 28th, during the North American Commercial Vehicle Show in Atlanta, Georgia.
“I really recommend showing up for the semi truck unveiling — maybe there’s a little more than what we are saying here,” said Musk at Tesla’s June 2017 shareholder meeting.