When it comes to the marine environment, it often seems like we hear nothing but an ocean (sorry) of bad news. One recent report, however, provided a rare bright spot: the number of plastic bags on the seafloor around the U.K. has declined dramatically over the last two decades, according to The Independent. This comes on the heels of worldwide efforts to phase them out in favor of reusable options.
Scientists at the Centre for Environment, Fisheries and Aquaculture Science (Cefas) spotted this trend in data on the types of junk pulled from the U.K. seafloor over the last 25 years.
“It is encouraging to see that efforts by all of society, whether the public, industry, NGOs or government to reduce plastic bags are having an effect,” lead author Thomas Maes, a Cefas marine litter scientist, told The Independent.
That doesn’t mean the fight against plastic is won, though.
The researchers found the overall amount of marine litter remained roughly constant over those 25 years, as other types of marine debris – such as plastic bottles and lost fishing gear – increased to fill the gap left by plastic bags. Meanwhile, recent research has suggested that the amount of plastic in the ocean is set to triple in the next decade. Those aren’t exactly encouraging statistics.
We have more work to do if we’re going to save our oceans from becoming plastic junkyards (which will, in turn, impact the food chain, and our water supplies). But what’s left of this good fight shouldn’t discourage you.
In fact, the tangible, data-proven decrease in plastic bags should be incredibly encouraging. Every time you actually remembered to bring your canvas bag to the grocery store? That made a difference. But we’ll have to keep at it to ensure that this trend continues.
It’s easy to feel like human impacts on the planet have gotten so bad that there’s nothing we can do to reverse them. Science clearly says that’s not true. With organizations, cities, and even entire countries banning everything from plastic straws to plastic cutlery, we could soon be seeing the tides turn.
But before you set your torch ablaze and delete Facebook, let’s take a beat. Is the platform really a toxic monster? Or perhaps more of a misunderstood beneficial beast?
Let’s ask science.
Last month, The Journal of Social Psychology published a study exploring the relationship between Facebook and stress. Using 138 active Facebook users as their guinea pigs, researchers from the University of Queensland found that taking a five-day break from the platform lowered levels of the stress hormone cortisol.
“[W]hile participants in our study showed an improvement in physiological stress by giving up Facebook, they also reported lower feelings of well-being,” lead researcher Eric Vanman said in a press release. “People said they felt more unsatisfied with their life and were looking forward to resuming their Facebook activity.”
And those lower cortisol levels? Participants didn’t even notice, reporting that they felt just as stressed as they did before quitting Facebook temporarily.
In some instances, using Facebook can actually help you cope with stress.
That’s according to a study the journal Computers in Human Behavior published in May 2017. Northwestern University researcher Renwen Zhang surveyed 560 Facebook-using university students, focusing on their use of Facebook to disclose information about stressful events in their lives.
Zhang concluded that opening up on Facebook helped the students mentally cope with stressful situations. When the students shared information, they were likely to get support from their Facebook friends in the form of encouragement, advice, or offers of help. This, in turn, made them feel supported, more satisfied with life, and less depressed.
Quitting Facebook means saying goodbye to all those digital hugs that can help you get through your latest breakup or crappy day at work.
Your body is flooded with embalming fluid, your brain is tucked into a freezer, and finally (and this is where researchers are currently a bit fuzzy on the details) science progresses to the point where it can reverse-engineer the human brain.
Voilà! Your digital self lives on forever. One more thing, though: You have to die first.
Not buying it? Neither are the neuroscientists at MIT. They’re calling bullshit on startup Nectome’s plans to back up your consciousness. MIT Media Lab announced it will sever any ties it has with Nectome — including a contract with Media Lab professor Edward Boyden — after the controversial startup drew an outpouring of criticism from the neuroscience community.
“It is so unethical—I can’t describe how unethical it is,” Sten Linnarsson, a scientist at the Karolinska Institute in Sweden, told MIT Technology Review. MIT Media Lab also called the idea unrealistic: “Neuroscience has not sufficiently advanced to the point where we know whether any brain preservation method is powerful enough to preserve all the different kinds of biomolecules related to memory and the mind.”
Nectome issued a statement in response, stating that it would only go ahead with the support of the scientific community. “We believe that rushing to apply vitrification today would be extremely irresponsible and hurt eventual adoption of a validated protocol.” Well, that sounds a lot more reasonable.
Regardless, Nectome has amassed a waiting list of volunteers who are ready to get their brains put on ice, collecting $200,000 in the process. That amount sounds like a promising start. But we’ll have to see what the startup’s financial future looks like now that MIT has pulled out entirely.
Still, MIT Media Lab isn’t ready to completely discard the idea as a whole: “It’s possible that someday we will be able to simulate, in a computer, neural circuits with great accuracy, based on detailed enough biomolecular maps.”
But, let’s call a spade a spade. The concept of a digital consciousness is science fiction. And suggesting that we euthanize the terminally ill by pumping their bodies with embalming fluid is bound to raise some eyebrows — to say the very least. The science isn’t there yet, so let’s all take a chill pill together, and wait for the technology to be ready. And let’s do it all before we ask people to give up their lives for some fantastical idea.
The next time you’re stuck in a mundane traffic jam, find some excitement in your car engine’s secret identity: it’s actually not so different from the exotic exoplanets in our universe.
Seriously. Stay with me here.
French astronomers discovered that computer models used to simulate how car engines emit pollutants could also be used to model hot exoplanet atmospheres.
The planets in question are scorching goliaths. They’re the size of Neptune or Jupiter, but orbit 50 times closer to their star than Earth does the Sun. This gives them hydrogen-rich gaseous atmospheres of 1,000 to 3,000 degrees Celsius (1,832 to 5,432 degrees Fahrenheit), which whip around at speeds of 10,000 kilometers (over 6,000 miles) per hour.
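Those parenthetical figures are plain unit conversions; here’s a quick sketch of the arithmetic, using the article’s numbers:

```python
def c_to_f(celsius):
    """Convert degrees Celsius to degrees Fahrenheit."""
    return celsius * 9 / 5 + 32

def kmh_to_mph(kmh):
    """Convert kilometers per hour to miles per hour (1 mile = 1.609344 km)."""
    return kmh / 1.609344

print(c_to_f(1000))              # 1832.0
print(c_to_f(3000))              # 5432.0
print(round(kmh_to_mph(10000)))  # 6214
```

Note that 3,000 °C works out to 5,432 °F, and 10,000 km/h to just over 6,200 mph.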
Under such intense (to say the least) conditions, scientists historically had trouble modeling what chemicals might be found in these atmospheres. As the hellishly hot, insanely fast gases swirl, they interact in unusual ways – creating chemicals that don’t fit the typical models astrophysicists use to simulate planets.
Almost shockingly, these extreme temperature and pressure conditions are not so different from those found in car engines. Car engine pollution models can examine temperatures over 2,000 degrees Celsius, along with a wide range of pressures. This makes them flexible enough to study warm exoplanets, too.
Since 2012, the research team has used these models to create simulations of the atmospheres on hot Jupiters and warm Neptunes, which were then made available to the astrophysics community in an open-access database.
The next step for this research will be to incorporate data from research at particle accelerators, which can provide information on how molecules absorb ultraviolet light at the extreme temperatures of exoplanets — data that was previously only available at room temperature.
“Other fields of research have an important role to play in the characterization of the fantastic diversity of worlds in the Universe, and in our understanding of their physical and chemical nature,” explained Olivia Venot, one of the lead authors and a researcher at Laboratoire Interuniversitaire des Systèmes Atmosphériques (Interuniversity Laboratory of Atmospheric Systems), in a press release.
These models could help scientists figure out how these far-away exoplanets work without ever being able to reach them. After all, at the moment, our car engines can’t yet transport us out to distant worlds. But they could get us a little closer to understanding them.
“Don’t Be Evil” has been one of Google’s corporate maxims for over 15 years. But its recent dealings with the Department of Defense have put that ideal on ice. For some reason, Google’s workers aren’t psyched about this!
Over three thousand Google employees signed a recent public letter demanding CEO Sundar Pichai shut down Project Maven — a Department of Defense contract to create a “customized AI surveillance engine” — and publicize a clear policy that “neither Google nor its contractors will ever build warfare technology.”
The letter’s got some pretty direct language, calling the company out on its loss of the aforementioned core value: “Google’s unique history, its motto Don’t Be Evil, and its direct reach into the lives of billions of users set it apart.” The commoditization of people’s personal data (ergo, their psyches) notwithstanding, obviously.
Gizmodo reported on Project Maven last month, describing it as “using machine learning to identify vehicles and other objects in drone footage, taking that burden off analysts.” Google and the Pentagon fired back, stating that the technology wouldn’t be used to create an autonomous weapons system that can identify targets and fire without a human squeezing the trigger.
CEO Pichai spun the letter and public exchange with the company as “hugely important and beneficial” in a statement to the New York Times, but of course, didn’t refer to any plans to put the brakes on the project. Pichai’s statement went on to say that the tech used by the Pentagon is available to “any Google Cloud customer” and reserved specifically for “non-offensive purposes.”
Thing is, Google’s far from the only tech industry player in cahoots with the military. Red flags immediately went up when news broke that a team of researchers from the Korea Advanced Institute of Science and Technology (KAIST) was partnering up with weapons company Hanwha Systems — a company that produces cluster bombs, not exactly a popular form of warfare, as far as these things go. Fifty researchers from thirty countries called for an immediate boycott of the Korean institute.
Microsoft and Amazon both signed multibillion-dollar contracts with the Department of Defense to develop cloud services. Credit where it’s due: At least the DOD isn’t trying to spin this as anything other than death machine-making. Defense Department chief management officer John Gibson didn’t beat around the bush when he said the collaboration was designed in part to “increase lethality and readiness.”
So that’s fun! And if Google’s recent advancements in AI tech faced a similar fate, think: Weaponized autonomous drones, equipped with private data, and a sophisticated AI. Not saying this is exactly how SkyNet starts, but, this is basically how SkyNet starts.
The counter to this argument, insomuch as there is one, is that these technological developments lead to better data. Better data leads to better object-identification technology, which could lead to more precise offensives, which could (theoretically) mean fewer civilian casualties, or at least (again, theoretically) increased accountability on the part of the military. The analogy: a calculator should make it exponentially more difficult to get the numbers “wrong” on your taxes, so automated, hyper-targeted death robots should make it exponentially more difficult to “accidentally” murder a school full of children.
All of which is to say: collaboration between the Department of Defense and various Silicon Valley tech companies is a dangerous game, and we have seen how quickly the balance can tilt in one direction. Having informed tech employees call out their CEOs publicly could lead to tech companies choosing their military contracts more carefully — or at least to more light being shed on who’s making what technologies, or rather, what technologies Silicon Valley coders are unknowingly working on.
More likely, it just results in these companies being more discreet about the gobsmackingly shady (but profitable!) death machine work they’re doing. Good thing we — like everyone else with a brain in their heads — are all ears.
Between the glowing blue and yellow swirls of distant galaxies, this tiny pinprick of light doesn’t look like much: a white smudge on the infinite black of the universe.
But this tiny speck has enormous significance for astronomers. It’s the most distant star ever seen, affording astronomers a glimpse back in time.
The star, MACS J1149+2223 Lensed Star 1 (more simply known as “Icarus”) was about 9 billion light years away when it emitted the light now reaching Earth. Most other objects spotted at this distance are either galaxies or exploding stars (AKA supernovas), which produce much more light than this distant glimmer.
Thanks to the constant expansion of the universe, Icarus would now be much farther away from our planet; by now, it’s probably gone supernova itself, and formed either a black hole or neutron star. (For why we can still view it, though, see #3.)
Here are four things you should know about this distant galactic neighbor, and why we’re just seeing it for the first time.
1. Spotting Icarus was a stroke of good luck
Icarus is so far away that we technically shouldn’t be able to see it: it’s about 100 times farther away than the most distant star telescopes had been able to view before now. Fortunately, astronomers got a little help from the universe in spotting it (and from the Hubble telescope — props to that).
Icarus was visible because of an astronomical phenomenon called gravitational lensing. In short, the gravity of massive celestial objects (in this case, a cluster of galaxies) bends light, creating a magnifying-glass effect for anything behind them. Overall, researchers told The Guardian, Icarus was magnified more than 2,000 times.
Icarus also got a special boost from an extra-magnifying star within the galaxy cluster, making it appear four times brighter over the course of the time the astronomers studied it. Thank you, physics.
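To a first approximation, lensing simply multiplies a source’s apparent brightness by its magnification factor, and magnifications stack. A toy sketch — the 2,000× and 4× figures are the article’s; the intrinsic flux value is made up for illustration:

```python
def apparent_flux(intrinsic_flux, magnification):
    """Lensing boosts observed flux by the magnification factor."""
    return intrinsic_flux * magnification

# Hypothetical intrinsic flux of 1.0 (arbitrary units).
base = apparent_flux(1.0, 2000)   # cluster lensing: magnified ~2,000x
boosted = apparent_flux(base, 4)  # temporary extra boost: 4x brighter still
print(base, boosted)              # 2000.0 8000.0
```

That multiplicative stacking is why a brief alignment with a single star inside the cluster could make an already-magnified Icarus appear four times brighter again.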
2. The star is a blue supergiant
Icarus would be an oddity in the universe — if it were still around. Analysis of the star’s light showed it was a blue supergiant, one of the hottest and highest-mass stars we know of; the blue supergiant Rigel A, the bright left “foot” of the constellation Orion, is 23 times more massive than the sun, and estimated to be several hundred thousand times brighter.
Stars like Icarus and Rigel are rare in the universe today, but in the early universe, they were common; according to io9, most of the early stars were blue supergiants at some point in their lives.
That makes sense, since Icarus’ distant light is actually somewhat like a time machine.
3. Icarus gives a view back in time
The universe is way, way bigger than you can probably comprehend. And because of this astronomical (sorry) size, it can take a really long time for light to reach Earth from the cosmic wilderness. Even traveling at its immense speed, by the time light from this distant star reached Earth, 9 billion years had passed.
When Icarus released the photons currently hitting the Hubble’s cameras, Earth hadn’t even formed yet — it would be another 4.4 billion years before our solar system even began to coalesce from the dust of the universe. Such distant views of the universe are helping astronomers learn about what the universe was like before our time, even giving us glimpses back to the moments after the Big Bang.
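The arithmetic behind that claim is simple subtraction: the light left Icarus roughly 9 billion years ago, and the solar system began forming roughly 4.6 billion years ago, leaving the ~4.4-billion-year gap the article cites. A quick check:

```python
light_travel_gyr = 9.0      # light left Icarus ~9 billion years ago
solar_system_age_gyr = 4.6  # solar system formed ~4.6 billion years ago

# Gap between the light's departure and the solar system's formation
gap = light_travel_gyr - solar_system_age_gyr
print(round(gap, 1))  # 4.4
```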
4. The view let scientists test dark matter theory
The Guardian reports that the team also used their view of Icarus to test a theory about dark matter, the mysterious substance that makes up 27 percent of the universe (its counterpart, dark energy, makes up another 68 percent). One theory proposed that dark matter was made of black holes, but what the researchers saw of Icarus didn’t support that theory — looking back at a decade of Hubble images, they didn’t see Icarus’ brightness vary over time. If the black-hole-dark-matter theory was correct, the star would have appeared brighter.
In the coming years, scientists hope to peer even further into our universe’s history with more powerful telescopes, like the James Webb Space Telescope and the Wide Field Infrared Survey Telescope (WFIRST). Recent budget cuts proposed by the White House have threatened the future of WFIRST. If the government is unsure just how much these space telescopes can accomplish, this discovery from their predecessor might serve as an apt reminder.
A perfectly styled plate of food, overflowing and untouched. A shot from behind a young woman, naked in the back of a van, looking out at a stunning landscape (#vanlife). Thin, attractive people in expensive workout clothes without a drop of sweat. Endless selfies that purport to be #nofilter and makeup-free while still somehow looking flawless.
Ho, hum, another day of scrolling through Instagram, and all the feelings of inadequacy it brings up. Over time, those feelings could wear on users’ mental health.
Instagram has negative effects on wellbeing, especially among young women, according to several recent studies. Britain’s Royal Society for Public Health (RSPH) ranked Instagram as the worst social network for mental health among young people. Now, Quartz reports that Instagram has decided to address these problems by creating a “Wellbeing Team.”
So far, it’s unclear what exactly that team will be doing, or who will be on it. That may be because Instagram is struggling to walk a fine line between helping its users and alienating them.
Yet what Instagram hasn’t done is introduce some of the RSPH’s suggestions, detailed in its study. The research suggested interventions like a pop-up that warns users they’ve been using social media too long, or a watermark that indicates if an image has been digitally altered.
That’s not all that surprising; telling users that they’ve been on an app too long would make being on the app feel like being nagged by a parent, and run counter to the addictive quality that social media companies have worked hard to build into their products. Additionally, if Instagram started calling out users for retouching their photos, they’d likely find their most prolific users ditching the app in a hurry.
Ultimately, the problem here is a paradox: many aspects of Instagram that make people feel terrible are the very things people come to the app to find. Users want to see images of beautiful places and people; they’re coaxed, compelled to compare their lives to others in the hope of reassuring themselves they’re doing alright. Unlike other apps that have faced these issues, such as Facebook, Instagram posts aren’t diluted by status updates, random shared articles, and other types of content; Instagram is built for voyeurism.
We won’t pretend that there are any easy answers to this issue. As Quartz and others have pointed out, this problem stems from a larger, systemic cultural issue — where depression and other mental health issues remain under-addressed, and in which how you look, and how well you fit into cultural expectations of “success,” are given more credence than actual happiness.
Maybe Instagram’s new “Wellbeing Team” will find some innovative ways to chip away at that.
Hey you! Ever wish your technology was more invasive? You love voice-to-text, but it’s just too public?
Some researchers at MIT Media Lab have come up with the perfect gadget for you. And it looks like a Bane mask crossed with a squid. Or, if you prefer: like a horror movie monster slowly encompassing your jaw before crawling into your mouth.
The researchers presented their work at the International Conference on Intelligent User Interfaces (yes, such a thing exists) in March in Tokyo.
Whenever you think of words, they’re silently, imperceptibly, transmitted to your mouth. More specifically, signals arrive at the muscles that control your mouth. And those signals aren’t imperceptible to a highly sensitive computer.
The researchers call this device the AlterEgo. It’s got seven electrodes positioned around the mouth to pick up these signals. The data that the electrodes pick up goes through several rounds of processing before being transmitted wirelessly to a device awaiting instruction nearby. Oh, and it’s got bone-conduction headphones so that devices can respond.
The scientists tested their prototype on a few people who trained the software to recognize the data that corresponded to different commands (“call,” “reply,” “add,”), then on a few more to see how accurate it was. The results were promising, though it’s not exactly ready to go into mass production.
The closest comparison to this system is a device you can address in your normal speech, like Siri or Alexa. But, terrifyingly, this is not scientists’ first attempt at creating a more direct way to transmit our thoughts to computers. Most earlier versions have relied directly on brain signals (from devices laid over or implanted in the brain. No thank you).
AlterEgo has the following advantages, according to the researchers:
It’s not invasive (seems like kind of a low bar but ok)
It’s 92 percent accurate (probably marginally better than your average autocorrect, about the same as Siri or Alexa)
It’s portable (and about as sexy as one of those Bluetooth earpieces)
Unlike direct brain readings, it can’t read your private thoughts (except for the ones you quietly mouth to yourself)
I admit, in some situations a device like this might be useful. Particular movements could tell your phone to turn on music, or use a calculator, or text your friend. It could control your “smart home,” turning off the oven or starting the coffeepot with a mere twitch. Heck, in 10 years, I could be thinking this article into existence. This goes double for people with disabilities or vision problems that might make controlling a digital device challenging otherwise.
BUT. But. There are a few things that might make AlterEgo less than ideal. The electrodes can’t shift when a person is using them, for example, or the reading will get all messed up. It’s hard to imagine that people would be comfortable hanging out with a device covering half their mouths. And there’s no telling how the system would do in real-world settings — that’s what the researchers have to test out next. And, of course, there’s the issue of crossed signals, like when Alexa thought random sounds were telling it to laugh. And — just thinking big for a second — if it were hacked, could the hacker use the electrodes to physically control your mouth?
Might we have a future in which our faces butt-dial for us? Who’s to say. But you can bet all the people in my nightmares of a dystopian future are equipped with one of these bad boys.
Here’s a catch-22 for the 21st century: Autonomous vehicles (AVs) will make roads safer by getting fallible human drivers out of the equation. But until AVs are safe enough, we need to rely on fallible human drivers to develop AVs.
In the interim, we might have introduced the most dangerous situation of all: bored humans who are supposed to be paying attention.
In the wake of the first AV-caused pedestrian death, we’re seeing just how big of a problem this can be. Last month, one of Uber’s self-driving cars struck and killed a pedestrian in Arizona. In video footage, the AV clearly doesn’t slow down before hitting the victim, Elaine Herzberg, which experts say could point to problems with Uber’s technology.
Not everyone is just blaming the tech, however. Uber’s AV had a human driver, Rafael Vasquez, behind the wheel at the time of the crash, and both AV experts and the victim’s family have criticized Vasquez for not doing enough to prevent it.
“The driver was eyes down most of the time, indicating complacency and not maintaining proper monitoring,” Missy Cummings, a professor of mechanical engineering and material science at Duke University, told the Wall Street Journal.
“It’s absolutely ridiculous,” Tina Marie Herzberg White, the victim’s stepdaughter, told the Guardian. “I can’t believe that the [driver] that was in the car did not see her.”
But are they expecting too much from AV operators? Or are manufacturers not expecting enough?
One former Uber test driver pointed out the pressures of the position to the WSJ: “The computer is fallible, so it’s the human who is supposed to be perfect. It’s kind of the reverse of what you think about computers.”
Manufacturers expect AV operators to keep constant watch on the road and intervene if the vehicle is about to cause an accident or violate a traffic law. But if the human intervenes too soon, the system’s capabilities aren’t really tested, which draws the ire of engineers. There’s another catch-22 for you, and it’s one that could literally put human lives in harm’s way.
Adding to the general aura of stress around the whole thing: How the public responds to AVs.
AV operators told the WSJ pedestrians would purposely jump out in front of their vehicles to see if they’d stop. Some AV operators have even had people physically assault the cars. Your job might be stressful, but is it “people banging on your office window” stressful?
Oh, and when it’s not stressful, the job is boring. It’s hard enough for regular drivers to resist the urge to daydream. Now imagine resisting that urge when you have nothing to do but stare straight ahead at mile after mile of unspooling road.
So: the job of AV operator is both stressful and boring. But is it actually hard?
Not according to one former Waymo test driver. “It’s about being alert. If you can’t be alert for a few straight hours, then you’re not a very good driver,” they told the WSJ.
AV operators can earn between $20 and $25 per hour, too, well above the minimum wage in the U.S. With a pretty short list of requirements, the candidate pool should be fairly large then, right? So why was Vasquez, who has multiple traffic citations on his record, operating Uber’s AV?
Apparently, a flawless driving record wasn’t one of Uber’s requirements for employment.
Maybe that’ll change in the wake of the fatal incident. But still, it won’t solve the catch-22 we’re currently stuck in. The only way out seems to be that AVs get a lot better, real quick.
The U.S. government has essentially no interest in regulating gene-edited crops. And while that might sound like a dramatic stance, it’s actually just a more formal articulation of a policy that’s been hinted at for years.
Last week, U.S. Secretary of Agriculture Sonny Perdue issued a statement clearing up the U.S. Department of Agriculture’s (USDA’s) stance on crops created using gene-editing techniques such as CRISPR.
This is a pretty significant departure from the USDA’s approach to genetically modified organisms (GMOs). Their names may be similar, but the products themselves are as different as apples and genetically modified oranges.
With gene editing, scientists make simple, precise changes to a crop’s genome, snipping out certain parts or adding in others. Want mushrooms that don’t brown as quickly? Delete the genes that contribute to browning, like researchers at Pennsylvania State University did in 2016. The key is that the gene-edited crops could appear naturally in the wild.
With genetic modification, scientists mix and match genes from different organisms. Want soybeans that are immune to pesticides? Insert DNA from pesticide-resistant bacteria right into the soybean seeds. Yeah, that’s not happening naturally, even if you get the soybeans and the bacteria super drunk.
The USDA was already taking a hands-off approach to gene-edited foods even before Perdue put it in writing.
In 2016, the department confirmed it wouldn’t regulate those browning-resistant mushrooms — the first time a CRISPR-edited food got the go-ahead from the department. Since then, it’s given another dozen or so crops the same treatment, according to Wired.
Perdue’s statement just makes it official: The USDA has complete faith in the safety of gene-edited crops. In fact, not only does Perdue think gene-editing food is totes safe, he also sounds like a straight-up fanboy.
“Plant breeding innovation holds enormous promise for helping protect crops against drought and diseases while increasing nutritional value and eliminating allergens,” Perdue wrote in the statement. “Using this science, farmers can continue to meet consumer expectations for healthful, affordable food produced in a manner that consumes fewer natural resources. This new innovation will help farmers do what we aspire to do at USDA: do right and feed everyone.”
The confusing rhetoric and rampant misinformation surrounding GMOs? There’s no need for that with gene-edited crops. That’s good news for consumers.
And it’s not too shabby for gene-editing researchers, either. They may have suspected the USDA wouldn’t oppose their work, but now they can be certain. Confidence in regulatory smooth sailing may help scientists secure funding for further research.
Once a gene-edited product is ready for stores, it won’t bear any special identifying marker, such as the labels on GMO foods. Those can function like a scarlet letter, keeping supermarkets and consumers away from foods that are in many ways superior to their conventionally grown counterparts.
So get ready for gluten-free wheat, bigger tomatoes, and (yup) those non-browning mushrooms to hit store shelves. You just might not know gene-editing had a hand in their creation. And, as the USDA statement indicates, you don’t need to, either.