Google will predict Final Four winners based on in-game data

Google has been working closely with the NCAA during this year's tournament, but now, during the Final Four, the company will be using predictive analytics to figure out who will win games. The wrinkle here is that the team will use data from the fir…
Engadget

Sketchy supply-chain reports predict record-breaking iPhone sales this year

If accurate, supply-chain reports of Apple’s orders for iPhone displays across 2018 point to record-breaking sales for the year.

Taiwanese supply-chain sources report that Apple is expected to buy 250-270M display panels for iPhones this year. This would be well above the 216M iPhones sold last year, and even higher than the company’s all-time record sales of 231M in 2015 …

9to5Mac

Health IoT: App helps sports stars predict and manage injuries

Researchers at the University of Tennessee Chattanooga have developed a platform that measures an athlete’s risk of injury using the Internet of Things (IoT).

The new system could allow athletes at every level, from superstar to hopeful, to create a personal injury risk profile, and manage it from their own smartphones.

Professional athletes live with the knowledge that a serious injury could occur at any moment. Beyond the physical repercussions, these apparent twists of fate can damage successful careers, affect team members or clubs, and have a lasting impact economically and psychologically.

Part of the solution to the ever-present threat of injuries lies in no longer treating them as bad luck, claim researchers. Instead, athletes and their trainers or managers can use new technology to help predict when they might occur.

Using the IoT, researchers at the University of Tennessee Chattanooga have developed a framework to predict and help reduce the risk of injury.

Their research is set out in Mitigating sports injury risks using Internet of Things and analytic approaches, a paper published in the journal Risk Analysis. It explains how screening procedures can help predict the likelihood of an injury using wireless devices and cloud analytics.

Read more: Pyeongchang Winter Olympics to be defended by drone-catching drones

Creating a dashboard for injury risk

Sports injury management, even at a professional level, will always rely on some form of subjective assessment. That might come from the athlete in question, who’s determined to run or play in the next game, despite the pain. Or it might come from a doctor who has to interpret that information and make a split-second decision, while facing commercial or personal pressures.

However, the University of Tennessee Chattanooga researchers have done their best to remove this element from the screening process – or at least to provide as much objective data as possible to minimise the risk.

This greater objectivity comes from combining the athlete’s previous injury history with the results of a number of standardised screening tests. The result is a real-time dashboard providing details of each individual athlete’s status.

Read more: British Athletics deploys digital pace-makers for Rio Olympics

Data, screening, and predictive analytics

The research project was developed in real-world conditions with a team of American football players.

A month before the players got together for preseason training, information on their previous injuries was collected using a Sport Fitness Index (SFI) survey. Each player then took a Unilateral Forefoot Squat (UFS) test, which assessed their ability to synchronise muscle responses in their legs while holding an upright position.

The researchers used accelerometers built into smartphones to measure the results. The collected data was then integrated with the athletes’ self-reports of previous injuries and with longitudinal tracking of exposure to game conditions.

In their analysis of the data, the researchers found the ‘red zone’: athletes who played at least eight games were over three times more likely to suffer an injury than those who played fewer than eight games. Of those athletes who exhibited at least one risk factor, 42 percent then sustained an injury.

“Assigning all athletes to a single type of training program, without consideration of an individual’s unique risk profile, may fail to produce a substantial decrease in injury likelihood,” wrote Gary Wilkerson, lead author of the study.

“The results also provide a useful estimation of the odds of injury occurrence for each athlete during the subsequent season.”
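
To make the screening idea concrete, here is a minimal sketch of how a per-athlete risk dashboard might be assembled. The field names, thresholds, and scoring below are illustrative assumptions, not the study’s actual methodology:

```python
from dataclasses import dataclass

@dataclass
class AthleteScreening:
    """Pre-season and in-season inputs (field names are illustrative, not the paper's)."""
    name: str
    prior_injury: bool      # from the Sport Fitness Index (SFI) survey
    ufs_asymmetry: float    # accelerometer-derived Unilateral Forefoot Squat (UFS) score
    games_played: int       # longitudinal exposure to game conditions

# Illustrative cut-offs only; the study derived its own thresholds from the data.
UFS_ASYMMETRY_CUTOFF = 0.25
HIGH_EXPOSURE_GAMES = 8

def risk_factors(athlete: AthleteScreening) -> list[str]:
    """List the risk factors this athlete exhibits."""
    factors = []
    if athlete.prior_injury:
        factors.append("previous injury (SFI)")
    if athlete.ufs_asymmetry > UFS_ASYMMETRY_CUTOFF:
        factors.append("poor UFS muscle synchrony")
    if athlete.games_played >= HIGH_EXPOSURE_GAMES:
        factors.append("high game exposure (8+ games)")
    return factors

def dashboard_row(athlete: AthleteScreening) -> dict:
    """One row of the per-athlete status dashboard described above."""
    factors = risk_factors(athlete)
    return {"athlete": athlete.name, "risk_factors": factors, "flagged": bool(factors)}

if __name__ == "__main__":
    squad = [
        AthleteScreening("Player A", prior_injury=True, ufs_asymmetry=0.31, games_played=10),
        AthleteScreening("Player B", prior_injury=False, ufs_asymmetry=0.12, games_played=5),
    ]
    for row in map(dashboard_row, squad):
        print(row)
```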

Internet of Business says

Moving forward, Wilkerson and his team predict that the prevalence of smartphones and other IoT devices will help to make these and similar screening tests more accessible to athletes at all levels.

Anybody participating in sport could then put all of their data together to identify their own personalised injury risk. A truly smart solution to a painful – and often costly – problem.

Read more: Philips expands healthtech portfolio with IoT, AI, cloud solutions

The post Health IoT: App helps sports stars predict and manage injuries appeared first on Internet of Business.

Researchers just taught robots to predict your every move

In a few years’ time the Droids from Star Wars are going to seem like relics. Today’s robots might be better suited for sewing clothes and building cars, but tomorrow’s could be as indispensable and ubiquitous as our smartphones are. A group of researchers in Europe recently published a white paper unveiling their experiments in teaching robots to anticipate human movements. The team’s work, to create “robots that can predict human actions and intent, and understand human non-verbal cues,” could pave the way for innumerable advances in the field. The researchers focused on combining previous research teaching AI to understand…

This story continues at The Next Web

Using Statistics to Predict the Next World War

It’d be nice if we could foretell the future, wouldn’t it? Predicting the next social media trend could give a business an edge and allow it to thrive in a saturated market. But the closest we can get to some kind of soothsayer magic is learning from the past: looking back at what we know has happened and using that information to make estimated guesses about what might occur — and when.

Statistical analysis of many years’ worth of data (decades, if not centuries) can help us predict everything from hashtags to flu outbreaks — researchers have even used Twitter activity to predict flu activity. Historical data is also being used to improve weather forecasting and predict natural disasters.

Now, Aaron Clauset, an assistant professor and computer scientist at the University of Colorado, has attempted to predict one of the greatest threats to humanity: war.

An Era of Peace?

It’s been more than 70 years since the last major world war. In fact, both World War I and World War II occurred within a single thirty-year span. Although there has been significant interstate conflict in the decades since, we have not had a global conflict in more than seven decades.

Some scholars argue that it’s only a matter of time. Others insist that we are living in an era of peace and that wars of that caliber, while part of our history, will not be part of our future. But is there any way to know for sure? Especially as technological advances have caused the very nature, and definition, of modern warfare to evolve?

The Correlates of War interstate war data as a conflict time series, showing both severity (battle deaths) and onset year for the 95 conflicts in the period 1823–2003. Credit: Aaron Clauset.

In an attempt to answer these questions, Clauset reviewed data on wars occurring between 1823 and 2003, collected by the Correlates of War Project, an online repository of war-related datasets available to the public. He then created computer models that could help put that data into context.

As he reviewed the data, Clauset paid close attention to what the world was like before, during, and after a long period of conflict. He specifically wanted to find other post-war periods in history during which humanity went decades without another major war. By identifying these periods, he hoped to be able to suss out what, if anything, has set the current seventy-year stretch of global peace apart.

What he found was that, while this period of “peace” may feel remarkable to us, in the vast span of human history, it isn’t even unusual. Indeed, for the post-World War II era of peace to even be statistically significant, it would need to persist uninterrupted for 100-140 years.

We’re not even three-quarters of the way there.
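
As a back-of-the-envelope illustration of that statistical logic (a sketch, not Clauset’s actual model), one can treat the onset of very large wars as a memoryless random process and ask how long a peace must last before it becomes surprising. The assumed mean time between very large wars below is an illustrative figure, not a number from the paper:

```python
import math

# Sketch only: if onsets of very large wars followed a memoryless (Poisson) process
# with a mean of MU years between them, the chance of a peace lasting t years would be
# exp(-t / MU), so a gap only becomes "surprising" at the 5% level once t exceeds
# MU * ln(20), roughly three times MU.

MU = 40.0  # assumed mean years between very large wars; illustrative, not from the paper

def p_peace_at_least(t_years: float, mu: float = MU) -> float:
    """Probability of seeing no very-large-war onset for t_years consecutive years."""
    return math.exp(-t_years / mu)

for t in (73, 100, 140):
    print(f"{t:>3} years of peace: p = {p_peace_at_least(t):.3f}")

print(f"Gap needed before p < 0.05: {MU * math.log(20):.0f} years")
# With MU = 40 the threshold lands near 120 years, the same ballpark as the
# 100-140 year figure quoted above.
```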

“These results imply that the current peace may be substantially more fragile than proponents believe,” Clauset wrote in his analysis, published in the journal Science Advances. That being said, Clauset’s analysis does point out that war itself is an inherently rare event.

He also asserted that the close proximity of the two world wars, periods of incredible violence, has essentially been counterbalanced by the decades since, in which wars have been comparatively sporadic.

Times between interstate war onsets, 1823–2003.

“In a purely statistical accounting sense,” Clauset wrote, “the long peace has simply balanced the books relative to the great violence.” He argued that if the books have been balanced, then in terms of predicting when the next major war will occur (statistically speaking, anyway), “the hazard of a very large war would remain constant.”

Clauset anticipated that, while humanity would probably appreciate the ability to predict moderate-to-major conflicts, the bigger question always remains: How far are we from the kind of catastrophic warfare that would put an end to life on Earth?

So Clauset also used his statistical model to predict the timeline of humanity’s ultimate downfall. Accounting for all the variables — changes in the global population, technological advances, and shifting political landscapes — his best guess put humanity’s doomsday anywhere between 383 and 11,489 years from now, with a median of 1,339 years.

He conceded that the probability of such a highly variable event is “likely unknowable.” But Clauset concluded that even if we can’t know for sure, “the prospect of a civilization-ending conflict in the next 13 centuries is sobering.”

The post Using Statistics to Predict the Next World War appeared first on Futurism.

Tracking Atmospheric “Rivers” Could Help Us Predict Extreme Weather

Improving Predictions

Atmospheric rivers are long, narrow bands of the atmosphere that carry ribbons of water vapor out of tropical regions. When they arrive over land, they usually bring rain or snow, making them an essential water source for otherwise water-scarce areas like Southern California and other drought-prone regions.

While these rivers are vital for water supply, they can also cause major flooding in the West Coast communities they reach. Given that they can bring about natural disasters, being able to predict when the rivers will make landfall would be of great help. At present, we’re only able to predict them about two weeks in advance.

Atmospheric rivers. Image Credit: Wikimedia Commons

Hoping to improve those predictions, which would give communities more time to prepare, a team of atmospheric scientists at Colorado State University (CSU) developed a model that can predict atmospheric river activity up to five weeks in advance. The study, published in the Nature Partner Journal Climate and Atmospheric Science, is part of an initiative funded by NOAA Research’s MAPP Program and used the team’s careful analysis of 37 years’ worth of weather data.
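
The published model is built from decades of real weather data and large-scale climate predictors; the toy sketch below, using synthetic data and hypothetical predictor indices, only illustrates the general shape of such a lagged statistical forecast:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Illustrative sketch only (synthetic data, hypothetical predictors), not the CSU model:
# forecast whether atmospheric-river (AR) activity will be above normal several weeks
# ahead, using climate indices observed now.

rng = np.random.default_rng(0)
n_weeks, lead = 37 * 52, 5                   # ~37 years of weekly samples, 5-week lead time

indices = rng.normal(size=(n_weeks, 2))      # two hypothetical large-scale climate indices
noise = rng.normal(size=n_weeks - lead)
# Toy rule: AR activity in week t + lead depends on the indices observed in week t.
ar_above_normal = (0.8 * indices[:-lead, 0] - 0.5 * indices[:-lead, 1] + noise) > 0

X, y = indices[:-lead], ar_above_normal
split = int(0.8 * len(y))                    # train on the first 80% of weeks, test on the rest
model = LogisticRegression().fit(X[:split], y[:split])
print(f"Hit rate at a {lead}-week lead on held-out weeks: {model.score(X[split:], y[split:]):.2f}")
```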

Whatever the Weather

Cory Baggett, a co-author of the paper and postdoctoral researcher, said the model is “impressive,” considering that even NOAA’s state-of-the-art forecasting system and other models elsewhere in the world can only make such predictions, at most, a week or two ahead of time. Many lives could be saved if local emergency crews and reservoir managers had more lead time to prepare for extreme weather events, like droughts or heavy rainfall, that atmospheric rivers can bring about.

While preparing for potential natural disasters is an important benefit of the model, the team also noted that it could improve even routine weather reporting — meaning it could help communities feel better prepared whatever the weather may be.

The post Tracking Atmospheric “Rivers” Could Help Us Predict Extreme Weather appeared first on Futurism.

Google’s new AI algorithm can predict heart disease through retina scan

Google has announced a new AI algorithm that can assess heart-disease risk from retinal scans. Scientists from Google and its health-tech subsidiary Verily claim that by analyzing scans of a patient’s eye, the newly developed software can accurately infer data including the patient’s age, blood pressure, and whether or not they smoke. It can also estimate the risk of the person suffering a major cardiac event with roughly the same accuracy as current leading methods. The algorithm could make it simpler for doctors to analyze a patient’s cardiovascular risk, since it doesn’t require a blood test. However, before the software can be used in practice, it needs to be tested further and refined. Google says that using deep learning models trained on data from 284,335 patients, it was able to predict cardiovascular risk factors from retinal images; the training data included eye scans as well as general medical data. The company claims the algorithm was able to distinguish the retinal image of a smoker from that of a non-smoker 71% of the time. Neural networks were used to mine this information for patterns, learning to associate telltale signs in the eye scans with the metrics …
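
For readers curious what “learning cardiovascular risk factors from retinal images” can look like in code, here is a minimal transfer-learning sketch with two illustrative prediction heads. It is not Google’s actual architecture or training setup, and the dummy tensors stand in for a real fundus-image dataset:

```python
import torch
import torch.nn as nn
from torchvision import models

class RetinaRiskModel(nn.Module):
    """Backbone CNN with two illustrative heads: age (regression) and smoker (binary)."""
    def __init__(self):
        super().__init__()
        backbone = models.resnet18(weights=None)   # stand-in backbone, not Google's network
        backbone.fc = nn.Identity()                # strip the ImageNet classification layer
        self.backbone = backbone
        self.age_head = nn.Linear(512, 1)
        self.smoker_head = nn.Linear(512, 1)

    def forward(self, x):
        features = self.backbone(x)                # (batch, 512) image features
        return self.age_head(features), self.smoker_head(features)

model = RetinaRiskModel()
images = torch.randn(4, 3, 224, 224)               # dummy batch standing in for fundus images
age_true = torch.tensor([[54.0], [61.0], [47.0], [70.0]])
smoker_true = torch.tensor([[1.0], [0.0], [0.0], [1.0]])

age_pred, smoker_logit = model(images)
# Multi-task loss: mean-squared error for age, logistic loss for smoking status.
loss = nn.MSELoss()(age_pred, age_true) + nn.BCEWithLogitsLoss()(smoker_logit, smoker_true)
loss.backward()
print(f"toy loss: {loss.item():.2f}")
```
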
Fone Arena

Fitbit and Apple Watch can help predict diabetes risk, study reveals

DeepHeart: Fitbit and Apple Watch can help predict diabetes risk

Smart watches just got smarter, according to a new study of the use of wearables to predict the risk of medical conditions, including diabetes, high cholesterol, and high blood pressure.

An AI neural network, known as DeepHeart, is the brains behind the breakthrough.

Research from digital heart-rate tracking company Cardiogram has revealed the latent potential of consumer heart rate trackers, such as those found in Fitbit and Apple Watch devices, to detect signs of cardiovascular illness. The company presented its findings at this week’s AAAI Conference on Artificial Intelligence in New Orleans.

By analysing the relationship between the heart rate and step counting data recorded by compatible wearables, Cardiogram was able to predict whether the participants had diabetes, with 85 percent accuracy.

Alongside diabetes risk, the research, carried out in partnership with the University of California, sought to train the company’s DeepHeart neural network to predict high cholesterol, high blood pressure and sleep apnea.

The study compared two semi-supervised training methods, sequence learning and heuristic pretraining, and successfully demonstrated that these methods can outperform traditional hand-engineered biomarkers.

The DeepHeart neural net

Existing (and widely used) predictive models rely on very small numbers of positive labels (each of which represents a human life at risk). However, readily available wearables such as Apple Watch, Fitbit, and Android Wear devices benefit from trillions of unlabelled data points – including rich signals such as resting heart rate and heart rate variability, which correlate with many health conditions. As an individual develops diabetes, their heart rate pattern changes, due to the heart’s link with the pancreas via the autonomic nervous system.
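
As a rough illustration only (not Cardiogram’s DeepHeart network or its semi-supervised pretraining), a sequence classifier over per-timestep heart rate and step counts might be sketched like this, with random tensors standing in for real wearable recordings:

```python
import torch
import torch.nn as nn

class WearableSequenceClassifier(nn.Module):
    """LSTM over (heart_rate, step_count) readings, ending in a single risk logit."""
    def __init__(self, hidden_size: int = 64):
        super().__init__()
        self.rnn = nn.LSTM(input_size=2, hidden_size=hidden_size, batch_first=True)
        self.head = nn.Linear(hidden_size, 1)

    def forward(self, x):                          # x: (batch, timesteps, 2)
        _, (h_n, _) = self.rnn(x)                  # final hidden state summarises the sequence
        return self.head(h_n[-1])                  # one logit per person

model = WearableSequenceClassifier()
week_of_readings = torch.randn(8, 7 * 24, 2)       # dummy data: 8 people, hourly readings for a week
labels = torch.randint(0, 2, (8, 1)).float()       # hypothetical diabetes labels

logits = model(week_of_readings)
loss = nn.BCEWithLogitsLoss()(logits, labels)
loss.backward()
print(f"toy training loss: {loss.item():.3f}")
```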

Utilising consumer heart rate trackers offers a rich vein of data with which to train a neural network. This kind of AI thrives on huge quantities of information, as seen in natural language processing algorithms from the likes of Amazon and Google.

The research was not straightforward, however. Tracking company Cardiogram had to overcome several challenges presented by consumer-grade devices, including sensor error, variations in the rate of measurement, and daily activities confusing the data.

The company is now planning to launch new features within its app for iOS and Android, incorporating DeepHeart.

Internet of Business says…

We’ve touched on the wealth of data that healthcare providers could potentially tap into when it comes to wearables, such as the KardiaBand. That example requires supplementary hardware, however. With DeepHeart’s intelligent use of neural network methods, Cardiogram has opened the door to healthcare professionals making use of the persistent monitoring capabilities of consumer wearables.

With an estimated 100 million-plus US adults now living with prediabetes or diabetes, many of whom aren’t aware of having the condition, Cardiogram’s study has significant practical implications. This is magnified by the fact that one in five Americans owns a heart rate sensor today, so the infrastructure is already there to deploy DeepHeart’s technology quickly. With rumours that Apple is considering including a glucose monitor in its next smart watch, the scope for using data from consumer wearables is set to grow further still.

The likely determining factor in adoption will be the rate of deployment. Hospitals are typically slow to adopt new AI technologies because the cost of errors is so high.

A word of warning, too: we’ve already seen the dangers of using ‘black box’ AI systems in our finance and justice systems – the dangers of using similarly opaque methods in healthcare are just as acute.

Read more: Police need AI help with surge in evidential data

The post Fitbit and Apple Watch can help predict diabetes risk, study reveals appeared first on Internet of Business.

IBM’s New AI Can Predict Psychosis in Your Speech

Using Words

Language is a fascinating tool, one that allows humans to share thoughts with one another. Often enough, if used with clarity and precision, language leads to an accord of minds. Language is also the tool by which psychiatrists evaluate a patient for particular psychoses or mental disorders, including schizophrenia. However, these evaluations tend to depend on the availability of highly trained professionals and adequate facilities.

Enter a team comprising members of IBM Research’s Computational Psychiatry and Neuroimaging groups and universities around the globe.

Together, they’ve developed an artificial intelligence (AI) capable of predicting with relative precision the onset of psychosis in a patient, overcoming the aforementioned evaluation barriers. Research on their psychosis-predicting AI has been published in the journal World Psychiatry.

The group built on the findings of a 2015 IBM study demonstrating the possibility of using AI to model the differences in speech patterns of high-risk patients who later developed psychosis and those who did not. Specifically, they quantified the concepts of “poverty of speech” and “flight of ideas” as syntactic complexity and semantic coherence, respectively, using an AI method called Natural Language Processing (NLP).
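
As a loose illustration of what such features can look like (crude stand-ins, not IBM’s actual pipeline), one could approximate semantic coherence as the similarity between consecutive sentences and use sentence length as a very rough proxy for syntactic complexity:

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def speech_features(sentences: list[str]) -> dict:
    """Crude proxies: 'flight of ideas' shows up as low sentence-to-sentence similarity,
    'poverty of speech' as short, simple sentences (a real system would parse syntax)."""
    tfidf = TfidfVectorizer().fit_transform(sentences)
    similarities = [cosine_similarity(tfidf[i], tfidf[i + 1])[0, 0]
                    for i in range(len(sentences) - 1)]
    sentence_lengths = [len(s.split()) for s in sentences]
    return {"semantic_coherence": float(np.mean(similarities)),
            "syntactic_complexity_proxy": float(np.mean(sentence_lengths))}

transcript = [
    "I went to the store yesterday to buy some bread.",
    "The store was closed so I walked back home.",
    "Purple elephants keep rearranging the furniture on the moon.",
]
print(speech_features(transcript))
```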

Their AI then evaluated the speech patterns of patients who had been asked to talk about themselves for an hour.

“In our previous study, we were able to construct a predictive model with the manual scores that reached 80 percent accuracy, but the automated features achieved 100 percent,” Guillermo Cecchi, lead researcher and manager of the Computational Psychiatry and Neuroimaging groups at IBM Research, told Futurism.

For their new study, the researchers evaluated a much larger patient group that engaged in a different kind of speech activity: talking about a story they’d just read. By training their psychosis-predicting AI using what they’d learned from the 2015 study, the team was able to build a retrospective model of patient speech patterns, said Cecchi.

According to the study, this system could’ve predicted the eventual onset of psychosis in patients with 83 percent accuracy. Had it been applied to the patients from the first study, the AI would’ve predicted with 79 percent accuracy which patients eventually developed psychosis.

An AI Shrink?

The IBM researchers’ psychosis-predicting AI could eventually help mental health practitioners as well as patients. As Cecchi wrote in a 2017 IBM Research post, traditional approaches to evaluating patients are quite subjective. He and his team believe that using AI and machine learning as tools for so-called computational psychiatry could eliminate this subjectivity and improve the chances of accurate assessments.

This new study is just one of a couple of IBM Research’s computational psychiatry efforts. Earlier in 2017, Cecchi’s team and researchers from the University of Alberta conducted a study through the IBM Alberta Center for Advanced Studies. That particular work combined neuroimaging techniques with AI in order to predict schizophrenia by analyzing a patient’s brain scans.

As for the new study, Cecchi believes that it could be a significant step toward making neuropsychiatric assessment available to the broader public, and improved diagnosis at the onset of psychosis could lead to improved treatment.

“This system can be used, for instance, in the clinic. Patients considered at-risk could be quickly and reliably triaged so that the (always-limited) resources can be devoted to those deemed very likely to suffer a first episode of psychosis,” Cecchi told Futurism. People without access to specialized professionals or clinics could send in audio samples for remote evaluation by the psychosis-predicting AI.

As Cecchi told Futurism, the approach needn’t be limited to psychosis, either. “Similar approaches could be implemented in other conditions, for example, depression,” he said. Indeed, IBM Researchers are already exploring the potential of computational psychiatry to aid in the diagnosis and treatment of other conditions, including depression, Parkinson’s and Alzheimer’s diseases, and even chronic pain.

AI is truly revolutionizing medicine. As these advanced systems reach the mainstream, we’ll enter a new era in healthcare, hopefully one in which anyone, anywhere, has access to the best diagnosis and treatment options.

The post IBM’s New AI Can Predict Psychosis in Your Speech appeared first on Futurism.
