Your iPhone is a content creation machine, especially for photo and video. That’s because it sports powerful cameras and image processing abilities. But it’s also a phone, and it isn’t built like a camera whose sole purpose is making great images. So we’ve rounded up some of the top accessories you’ll need to make your iPhone […]
Developer Shallot Games released Vista Golf [Free] back in July of last year, and it stood out in the sea of similar mini-golfing games with its clean visuals, smart controls, and weekly rotation of new courses. Unfortunately, Vista Golf is the work of a lone developer, and real-life obligations and contract work pushed some of the plans for the game off to some point in the future. Now, with more free time to work on the game, Shallot is readying a significant update to Vista Golf that will include a very user-friendly level creator. Players will be able to create and share levels online for potentially limitless mini-golfing fun. Check out our hands-on demo of the Vista Golf level editor, and look for a call for beta testers soon, with the update most likely going live this summer.
Plenty of companies are talking about artificial intelligence and machine learning today in vague, disconnected terms. It will certainly influence our strategy; not sure how, but everything’s coming up AI, right?
As a pleasant antidote to all that bluff and bluster, how about this from John Stone, senior vice president of the Intelligent Solutions Group at agricultural manufacturing giant John Deere? “AI and machine learning is going to be as core to John Deere as an engine and transmission is.”
Make no mistake about it, these are exciting times for the 180-year-old Deere & Company. In the past several months, the company has acquired Blue River Technology, a machine learning-centric startup, and opened a lab in the heart of Silicon Valley.
Yet this is just the way things have been done for some time at the company – it’s just that the technology has changed along the way.
Than Hartsock, director of precision agriculture solutions at John Deere, has been involved with the company for much longer than his almost 17-year tenure, having grown up on a commercial grain farm in Ohio. In the late 1990s, his education – Hartsock has degrees in soil and crop science – involved working on projects around soil sensing technologies. Deere acquired NavCom Technology, a provider of global navigation satellite system (GNSS) technology, at around the same time. “It was clear, even when I was in high school, that John Deere was uniquely committed to precision agriculture,” says Hartsock.
It was the Internet of Things long before anyone came up with a proper name for it. Yet this initial investment translates to a serious advantage for the company today. “Those early investments have allowed us to, I would say, position the integration of those components into our equipment into our machines, across machines, and into our dealerships,” explains Hartsock. “It went from ‘okay, this is something Deere is doing [and] it may not be completely clear why we’re doing it’, [and] now it’s at the forefront of our company. It’s how we think about our value proposition to the industry, to farmers, crop producers, and customers.”
No stone is left unturned, no crop is left unfurled – and this is where Blue River comes in. The company provides what it calls ‘see and spray’ technology, which utilises machine learning to process, in real-time, images of weeds and crops and tell the sprayer what and where to spray. It makes for a vast improvement on anything a human can do – but it remains important to keep human expertise.
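To make the idea concrete, here is a minimal sketch of the kind of per-frame decision loop a ‘see and spray’ system implies. The classifier, frame format, and nozzle mapping are illustrative assumptions for this article, not Blue River’s actual system or API.

```python
# Hypothetical sketch of a "see and spray" decision loop. classify() is an
# assumed machine-learning model that returns a (label, confidence) pair for
# each region of the frame, where each region maps to one spray nozzle.

def spray_decisions(frames, classify, threshold=0.9):
    """For each camera frame, return the nozzle indices aimed at weeds."""
    decisions = []
    for frame in frames:
        regions = classify(frame)
        # Only spray where the model is confident it sees a weed, so
        # crops (and uncertain detections) are left alone.
        nozzles = [i for i, (label, conf) in enumerate(regions)
                   if label == "weed" and conf >= threshold]
        decisions.append(nozzles)
    return decisions
```

The key design point is that the confidence threshold is where human judgment enters: a farmer or agronomist can trade chemical savings against the risk of missed weeds by tuning it.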
“Farmers, and their advisors and contractors – these are individuals that bring decades and generations of knowledge about the practices, about the land that they farm,” says Hartsock. “The way we see it is the technology – even artificial intelligence and machine learning – provides them the tools to essentially extend and scale their knowledge.
“Imagine the smart spraying scenario… you could imagine an agronomist, a farmer needing to come into that field ahead of time,” Hartsock adds. “What’s the state of the crop? How much input do I want to invest in this crop at this stage? The machine is going to be able to discern between weeds and crops, but I need to decide economically, agronomically, how much I want to invest.”
Hartsock will be speaking at IoT Tech Expo Global in London on April 18-19, discussing how agriculture has become a prime example of making the most of connected technologies. Inside the industry, technological advancement has never been clearer – but what about outside it?
Take self-driving cars as an example. You can’t move for hype and headlines around them, but what can they actually do today? Compared to a smart tractor, one can argue it’s mostly child’s play – and Hartsock wants to make clear how smarter machines and the IoT have ‘infiltrated’ agriculture.
“When you look at a planter and a tractor, in many cases, nearly all cases, that planter or that seeder will have a sensor on every row that’s measuring every seed and every row that’s dropped into the soil,” says Hartsock. “It will have a sensor that measures the motion of the planter row unit to make sure the row unit is keeping in close contact with the soil, and if it’s not maintaining contact, the sensor informs an actuator to apply more pressure to the row unit.
“That’s just the planter,” he adds. “The tractor is equipped with many sensors around the engine and transmission, and then that tractor, like most of our large ag machines, is equipped with a 4G modem that then provides connectivity between those sensors and data that’s being acquired, and then connected to the cloud.
“Once the data gets to the cloud we give the user, the farmer, the contractor, the authority over the data to dictate control and share with other partners and other companies,” Hartsock says. “You really then have this ecosystem that evolves, develops, for usage of the data… all generated out of the work that’s being done in the field by that smart machine.”
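The planter behavior Hartsock describes is a classic closed feedback loop: a sensor measures row-unit/soil contact, and an actuator adjusts downforce when contact drops. A toy proportional version might look like the following; the function names, units, and gain are hypothetical, not John Deere’s implementation.

```python
# Illustrative sketch of the row-unit downforce loop: compare measured soil
# contact against a target and nudge the actuator's force setting toward it.

def adjust_downforce(current_force, measured_contact, target_contact, gain=0.5):
    """Return a new downforce setting for the row-unit actuator."""
    error = target_contact - measured_contact
    # Positive error means the row unit is losing contact with the soil,
    # so apply more pressure; negative error means back off.
    return current_force + gain * error
```

Run once per sensor reading, this keeps each row unit pressed to the soil without a human touching anything – the same pattern, scaled up, across every row of the planter.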
A report out of Asia on Sunday claims Apple is preparing a new entry-level 13.3-inch MacBook offering that boasts a screen resolution comparable to a MacBook Pro, but comes with a price tag closer to that of a MacBook Air.
Since Siri’s introduction with the iPhone 4s in 2011, responses to Apple’s AI assistant have often skewed unfavorable, most recently in several HomePod reviews that cited Siri as one of the biggest downsides of owning the speaker. This week, Siri creator, co-founder, and former board member Norman Winarsky added his own commentary on the assistant’s current state, saying this isn’t where he thought Siri would be at this point (via Quartz).
In 2008, Siri began as a spin-off of SRI International, where Winarsky was president, and eventually launched as an app for iOS in February 2010. Two months later, Apple acquired Siri, and just over a year after that introduced it with the iPhone 4s, shutting down the standalone app shortly thereafter. Seven years later, Winarsky says Siri’s capabilities have fallen short of his earlier predictions for where he thought the assistant, and Apple’s development of it, would end up.
Specifically, Winarsky’s comments focus on what Siri’s intention was “pre-Apple” versus where the assistant is today. According to the co-founder, Siri was originally meant to be incredibly intelligent in just a few key areas — travel and entertainment — and then “gradually extend to related areas” once it mastered each. Apple’s acquisition pivoted Siri to an all-encompassing life assistant, and Winarsky said that this decision has likely led Apple to search “for a level of perfection they can’t get.”
But part of it is also likely because Apple chose to take Siri in a very different direction than the one its founders envisioned. Pre-Apple, Winarsky said, Siri was intended to launch specifically as a travel and entertainment concierge. Were you to arrive at an airport to discover a cancelled flight, for example, Siri would already be searching for an alternate route home by the time you pulled your phone from your pocket—and if none was available, would have a hotel room ready to book.
It would have a smaller remit, but it would learn it flawlessly, and then gradually extend to related areas. Apple launched Siri as an assistant that can help you in all areas of your life, a bigger challenge that will inevitably take longer to perfect, Winarsky said. […] “These are hard problems and when you’re a company dealing with up to a billion people, the problems get harder yet,” Winarsky said. “They’re probably looking for a level of perfection they can’t get.”
Last September, Apple VP of marketing Greg Joswiak commented on a few aspects of Siri’s development, stating that Apple’s aim from the beginning has been to make Siri a “get-s**t-done” machine. Joswiak did a series of interviews around that time, after Siri leadership moved to Craig Federighi and before the assistant’s sixth birthday. In one, he discussed the claim that Siri development has been hindered by Apple’s commitment to privacy, describing these reports as “a false narrative.”
Winarsky didn’t specifically comment on Apple’s focus on privacy and how that could be a factor in Siri’s development, but he did state that there’s one simple factor absent from Siri today: “Surprise and delight is kind of missing right now.”
Siege of Dragonspear is available right now on the App Store. It’s a stand-alone game that fills some of the story gaps between Baldur’s Gate and Baldur’s Gate 2. It’s also the first bit of official new content for the series in more than a decade.
Netflix today announced that it is updating its parental controls to create more detailed protection for young viewers. The service is also adding clearer maturity level rating labels to its content.
When the weather is warm enough, I like to go for a quick run two or three times a week. It’s somewhat meditative. It allows me to focus on the day ahead, and my Apple Watch lets me see how many steps I’ve run. It’s nice.
I’m not alone. Fitness trackers like Fitbit and Jawbone have been on wrists for years. Consumers rely on them and the ecosystem of associated apps to meet fitness goals. And yet, these devices often fall short of identifying actionable health insights, such as risk factors for diabetes and heart disease.
Companies like Apple plan to change that. Last year, the company partnered with Stanford to bring diabetes testing to its smartwatch. Even more recently, Apple joined forces with a startup called Cardiogram to find meaningful ways to use information on irregular heart rates. Eventually, this information could be used to detect diabetes, hypertension, sleep apnea, and atrial fibrillation.
But Apple isn’t the only company innovating the wearables space. Here are some ways wearables will help consumers take charge of their own health in the very near future.
Heart Rate Health Monitoring
Heart rate can offer valuable insights into a person’s health. An accelerated heart rate is a sign of an impending heart attack, for instance, and an irregular heartbeat can signal a variety of concerning conditions. Although many of today’s fitness wearables provide heart rate tracking, consumers are still unsure how to put the information to use.
The next generation of wearables will address these shortcomings. The iBeat smartwatch will not only monitor a wearer’s heart rate, but it also includes a help button that connects to a 24/7 response center. Jawbone has been ramping up to shift into medical tracking, and the latest models of its fitness bracelet are an important first move. The Jawbone UP3 and UP4 monitor your heart rate, but they also give you information on what those metrics mean for your health.
Patient Data Monitoring
Patients spend time in the hospital hooked up to machines that monitor vital signs around the clock. Once a patient leaves the clinical setting, however, medical professionals no longer have a way to monitor them. Technology like that being developed by MYIA Labs uses a combination of under-bed sensors and apps to track your heart rate and respiratory rate while you sleep. This information is collected and used to monitor chronic conditions like Congestive Heart Failure (CHF). Another wearable sensor is the Kardia Band, which can note an abnormal heart rate or signs of atrial fibrillation, which can lead to issues like blood clots and stroke. Once detected, the app sends an email to either you or your doctor so you can take action.
Few conditions are as well suited to patient monitoring as diabetes. Patients with the disease must keep constant watch on their glucose levels, which has traditionally required drawing blood through a finger prick. Wearables bring the possibility of monitoring those levels without drawing blood: Diabetes Sentry, for example, tracks a patient’s skin temperature and perspiration levels to detect signs of a drop in blood glucose, and its alert signals that it may be time for treatment.
In addition to monitoring for health problems, fitness wearables will still do what they were originally designed to do, but with more advanced health data. The Polar A370 fitness tracker not only measures your activity but also provides guidance on how to improve your workout routine. Like others, this wearable tracks sleep activity, and it asks wearers how they’re feeling each day so it can put that information to use in offering insights.
For those who aren’t interested in wearing bracelets, watches, or patches, SPIRE has a tag that clips onto your clothing. Once in place, the wearable takes tracking to the next level. It starts offering insights to help reduce stress levels, sleep better, and be more active. Trackers are even being built into clothing items like sports bras and underwear. This helps to monitor people without forcing them to wear a band or watch.
Technology innovators are envisioning a time when no one will be surprised by a heart attack or stroke. At the same time, companies across the globe are making it easier for chronic patients to enjoy around-the-clock care. All this from the comfort of their home. More than just fitness trackers, these sensors are revolutionizing healthcare.
The post How Wearables Will Take Health Monitoring to the Next Level appeared first on ReadWrite.
When it comes to the future of clean and safe transportation, all bets seem to be on electric autonomous vehicles. These combine two of today’s most advanced technologies — electric motors and self-driving software. While both have seen much improvement, there’s still a lot of room for further development.
Which is why not every carmaker is betting on the conventional battery-electric powertrain for its next-generation driverless vehicles. One such car manufacturer is South Korea’s Hyundai, which unveiled the Nexo at this year’s Consumer Electronics Show (CES).
A crossover SUV that runs on hydrogen fuel, the Nexo has a range of approximately 800 km (500 miles) and is capable of a full refuel in only three to five minutes. When it comes out this March in Korea, refueling will mean taking the Nexo to dedicated hydrogen refueling stations.
The Nexo comes with semi-autonomous technology that Hyundai promises will be advanced to Level 4 autonomy by 2021. That might not be much of a stretch, though, considering the Nexo’s driving demonstration earlier this February.
According to reports, the Nexo SUV set a record for autonomous driving on a highway when it completed 190 km (118 miles) of highway on full “cruise” mode. The stretch was managed by three Nexo SUVs and two Genesis G80s — from Hyundai’s luxury brand — outfitted with self-driving systems that follow Level 4 autonomy standards as described by the Society of Automotive Engineers (SAE).
It’s reportedly the first time a self-driving vehicle traveled more than 100 km (62 miles) at the maximum allowable speeds of up to 110 km/h (68 mph). All the while, the vehicles successfully overtook slower vehicles, changed lanes, and used automated toll gates — all without human intervention.
“We conducted a significant number of highway test drives amounting to hundreds of thousands of kilometers traveled, which enabled [us] to accumulate a vast amount of data that helped enhance the performance of our self-driving vehicles,” Hyundai said, a local news outlet reports.
This kind of performance demands more than the typical electric car battery, Hyundai vice chairman Chung Eui-sun told CarAdvice at CES. He explained that vehicles with Level 4 autonomy (as well as Level 5) would require enough energy to power the vehicle’s onboard processing computer while it handles 200-300 terabytes of data. “[Pure] electric vehicle battery is not enough for that, so maybe fuel cell can cover that amount of data processing,” explained Chung.
Best of all, the only “waste” from hydrogen fuel-powered vehicles is water vapor, which could be collected and stored for later use.
The technology isn’t exactly new; Hyundai developed its first hydrogen fuel cell engine in 1998 and has worked on perfecting it since, though uptake has been slow owing to hurdles such as sparse refueling infrastructure and cost. Now, alongside Hyundai, other carmakers are looking at the technology again for developing cleaner vehicles.
The post Hyundai’s Hydrogen-Powered, Self-Driving SUV Runs on Level 4 Autonomy appeared first on Futurism.
Apple’s native Camera app in iOS 11 has plenty of tools for helping you get the right shot, but some are more hidden than others. The camera level is the perfect example of a really handy tool that many users don’t even know exists, mainly because it’s part of a feature that’s turned off by default.
If you tend to take a lot of photos from an overhead point of view, like a picture of a meal on a table, or an object lying on the floor, then you’ll want to use the camera level, as it helps you capture a balanced shot without having to use a tripod arm or mount. It’s also useful for taking shots of scenes directly above you, such as in the sky or on the ceiling.
Here’s how to enable and use it on iOS 11.
How to Enable the Camera Level on iPhone and iPad
The camera level tool is part of the Grid overlay, which is useful in itself for applying the rule of thirds in your pictures for more balanced compositions. First, then, you need to turn on the grid.
- Open the Settings app on your iOS device.
- Scroll down the list and tap Camera.
- Toggle on the switch next to Grid.
How to Use the Camera Level on iPhone and iPad
- Open the Camera app on your iOS device.
- Set the capture mode to Photo, Portrait, Square, or Time-Lapse, using the sliding menu above the shutter button.
- Position the camera lens above or below the subject of your photo.
- Line up the floating crosshair with the fixed crosshair in the center of the screen by adjusting the angle of your phone’s camera. The crosshairs will both glow yellow when in perfect alignment.
- Tap the shutter button to capture the shot.
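Under the hood, the level works because the two crosshairs merge only when the device is close to flat. A toy model of that check, driven by the device’s pitch and roll angles, might look like this; the tolerance value is an illustrative guess, not Apple’s actual threshold.

```python
import math

# Toy model of the camera level: the floating crosshair's offset from the
# fixed one tracks the device's tilt, so alignment means near-zero tilt.

def crosshairs_aligned(pitch_deg, roll_deg, tolerance_deg=2.0):
    """True when the device is flat enough that the floating and fixed
    crosshairs would merge (and glow yellow in the Camera app)."""
    tilt = math.hypot(pitch_deg, roll_deg)  # combined tilt from horizontal
    return tilt <= tolerance_deg
```

On a real iOS device, the pitch and roll inputs would come from the motion sensors (e.g. CoreMotion’s device attitude); the point here is simply that “level” is a small combined tilt, not two independent axis checks.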
The level tool also comes in handy when scanning documents on a desk with your phone’s camera, but iOS now offers a dedicated scanning feature in the Notes app, so you’ll probably want to use that instead.