MAKING OUR GREEN ECONOMY COME ALIVE AND THRIVE™

NEWS

ENVIRONMENTAL NEWS

Remember to also check out the "Green Local 175 News" for environmental and green news within a 175-mile radius of Utica-Rome. Organizations and companies can send their press releases to info@greenlocal175.com. Express your viewpoint on a particular issue by sending an e-mail to our opinion page, "Urge to be Heard". Lastly, for more environmental information, listen to our Green Local 175 Radio & Internet Show.

Melting of Arctic mountain glaciers unprecedented in the past 400 years

PUBLIC RELEASE: 10-APR-2018

AMERICAN GEOPHYSICAL UNION

WASHINGTON D.C. -- Glaciers in Alaska's Denali National Park are melting faster than at any time in the past four centuries because of rising summer temperatures, a new study finds.

New ice cores taken from the summit of Mt. Hunter in Denali National Park show summers there are at least 1.2-2 degrees Celsius (2.2-3.6 degrees Fahrenheit) warmer than summers were during the 18th, 19th, and early 20th centuries. The warming at Mt. Hunter is about double the amount of warming that has occurred during the summer at areas at sea level in Alaska over the same time period, according to the new research.

The warmer temperatures are melting 60 times more snow from Mt. Hunter today than the amount of snow that melted during the summer before the start of the industrial period 150 years ago, according to the study. More snow now melts on Mt. Hunter than at any time in the past 400 years, said Dominic Winski, a glaciologist at Dartmouth College in Hanover, New Hampshire and lead author of the new study published in the Journal of Geophysical Research: Atmospheres, a journal of the American Geophysical Union.

The new study's results show the Alaska Range has been warming rapidly for at least a century. The Alaska Range is an arc of mountains in southern Alaska home to Denali, North America's highest peak.

The warming correlates with hotter temperatures in the tropical Pacific Ocean, according to the study's authors. Previous research has shown the tropical Pacific has warmed over the past century due to increased greenhouse gas emissions.

The study's authors conclude warming of the tropical Pacific Ocean has contributed to the unprecedented melting of Mt. Hunter's glaciers by altering how air moves from the tropics to the poles. They suspect melting of mountain glaciers may accelerate faster than melting of sea level glaciers as the Arctic continues to warm.

Understanding how mountain glaciers are responding to climate change is important because they provide fresh water to many heavily-populated areas of the globe and can contribute to sea level rise, Winski said.

"The natural climate system has changed since the onset of the anthropogenic era," he said. "In the North Pacific, this means temperature and precipitation patterns are different today than they were during the preindustrial period."

Assembling a long-term temperature record

Winski and 11 other researchers from Dartmouth College, the University of Maine and the University of New Hampshire drilled ice cores from Mt. Hunter in June 2013. They wanted to better understand how the climate of the Alaska Range has changed over the past several hundred years, because few weather station records of past climate in mountainous areas go back further than 1950.

The research team drilled two ice cores from a glacier on Mt. Hunter's summit plateau, 13,000 feet above sea level. The ice cores captured climate conditions on the mountain going back to the mid-17th century.

The physical properties of the ice showed the researchers what the mountain's past climate was like. Bands of darker ice with no bubbles indicated times when snow on the glacier had melted in past summers before re-freezing.

Winski and his team counted all the dark bands - the melt layers - from each ice core and used each melt layer's position in the core to determine when each melt event occurred. The more melt events they observed in a given year, the warmer the summer.

They found melt events occur 57 times more frequently today than they did 150 years ago. In fact, they counted only four years with melt events prior to 1850. They also found the total amount of annual meltwater in the cores has increased 60-fold over the past 150 years.
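For readers curious about the arithmetic, the melt index described above boils down to counting dated dark layers and comparing melt events per year between periods. Below is a toy sketch of that bookkeeping; the layer years are invented purely for illustration and are not the Mt. Hunter data.

```python
# Toy illustration of turning dated melt layers into a before/after frequency
# comparison, in the spirit of the ice-core analysis described above.
# The layer years below are invented for illustration, not the Mt. Hunter record.

melt_layer_years = [1795, 1812, 1998, 1998, 2004, 2005, 2009, 2009, 2010, 2012]

def melt_rate(years, start, end):
    """Melt events per year within the window [start, end)."""
    count = sum(1 for y in years if start <= y < end)
    return count / (end - start)

early_rate = melt_rate(melt_layer_years, 1700, 1850)
recent_rate = melt_rate(melt_layer_years, 1990, 2015)

print(f"Pre-1850 rate: {early_rate:.3f} melt events per year")
print(f"Recent rate:   {recent_rate:.3f} melt events per year")
print(f"Increase:      {recent_rate / early_rate:.0f}x")
```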

The surge in melt events corresponds to a summer temperature increase of at least 1.2-2 degrees Celsius (2.2-3.6 degrees Fahrenheit) relative to the warmest periods of the 18th and 19th centuries, with nearly all of the increase occurring in the last 100 years. Because there were so few melt events before the start of the 20th century, the temperature change over the past few centuries could be even higher, Winski said.

Connecting the Arctic to the tropics

The research team compared the temperature changes at Mt. Hunter with those from lower elevations in Alaska and in the Pacific Ocean. Glaciers on Mt. Hunter are easily influenced by temperature variations in the tropical Pacific Ocean because there are no large mountains to the south to block incoming winds from the coast, according to the researchers.

They found during years with more melt events on Mt. Hunter, tropical Pacific temperatures were higher. The researchers suspect warmer temperatures in the tropical Pacific Ocean amplify warming at high elevations in the Arctic by changing air circulation patterns. Warmer tropics lead to higher atmospheric pressures and more sunny days over the Alaska Range, which contribute to more glacial melting in the summer, Winski said.

"This adds to the growing body of research showing that changes in the tropical Pacific can manifest in changes across the globe," said Luke Trusel, a glaciologist at Rowan University in Glassboro, New Jersey who was not connected to the study. "It's adding to the growing picture that what we're seeing today is unusual."

The American Geophysical Union is dedicated to advancing the Earth and space sciences for the benefit of humanity through its scholarly publications, conferences, and outreach programs. AGU is a not-for-profit, professional, scientific organization representing 60,000 members in 137 countries. Join the conversation on Facebook, Twitter, YouTube, and our other social media channels.

Food packaging could be negatively affecting nutrient absorption in your body

PUBLIC RELEASE: 9-APR-2018

BINGHAMTON, N.Y. - Food packaging could be negatively affecting the way in which your digestive tract operates, according to new research by faculty and students at Binghamton University, State University of New York.

"We found that zinc oxide (ZnO) nanoparticles at doses that are relevant to what you might normally eat in a meal or a day can change the way that your intestine absorbs nutrients or your intestinal cell gene and protein expression," said Gretchen Mahler, associate professor of bioengineering.

According to Mahler, these ZnO nanoparticles are present in the lining of certain canned goods for their antimicrobial properties and to prevent staining of sulfur-producing foods. In the study, canned corn, tuna, asparagus and chicken were analyzed using mass spectrometry to estimate how many particles might be transferred to the food. It was found that the food contained 100 times the daily dietary allowance of zinc. Mahler then looked at the effect the particles had on the digestive tract.

"People have looked at the effects of nanoparticles on intestinal cells before, but they tend to work with really high doses and look for obvious toxicity, like cell death," said Mahler. "We are looking at cell function, which is a much more subtle effect, and looking at nanoparticle doses that are closer to what you might really be exposed to."

"They tend to settle onto the cells representing the gastrointestinal tract and cause remodeling or loss of the microvilli, which are tiny projections on the surface of the intestinal absorptive cells that help to increase the surface area available for absorption," said Mahler. "This loss of surface area tends to result in a decrease in nutrient absorption. Some of the nanoparticles also cause pro-inflammatory signaling at high doses, and this can increase the permeability of the intestinal model. An increase in intestinal permeability is not a good thing -- it means that compounds that are not supposed to pass through into the bloodstream might be able to."

Although Mahler studied these effects in the lab, she said she is unsure what the long-term health implications might be.

"It is difficult to say what the long-term effects of nanoparticle ingestion are on human health, especially based on results from a cell culture model," said Mahler. "What I can say is that our model shows that the nanoparticles do have effects on our in vitro model, and that understanding how they affect gut function is an important area of study for consumer safety."

The researchers are looking at how an animal model (chickens) responds to nanoparticle ingestion.

"We have seen that our cell culture results are similar to results found in animals and that the gut microbial populations are affected. Future work will focus on these food additive-gut microbiome interactions," said Mahler.

This is the first research that analyzes how ZnO nanoparticles affect the human body. The study was done by Mahler, Fabiola Moreno-Olivas, a graduate student studying biomedical engineering, and their collaborator Elad Tako from the Plant, Soil and Nutrition Laboratory, Agricultural Research Service, U.S. Department of Agriculture, Ithaca, N.Y. The research is funded by the National Institute of Environmental Health Sciences.

The study, "ZnO nanoparticles affect intestinal function in an in vitro model," was published in the journal Food and Function.

Palm trees are spreading northward - how far will they go?

New York NY (SPX) Mar 27, 2018

What does it take for palm trees, the unofficial trademark of tropical landscapes, to expand into northern parts of the world that have long been too cold for palm trees to survive? A new study, led by Lamont-Doherty Earth Observatory researcher Tammo Reichgelt, attempts to answer this question. He and his colleagues analyzed a broad dataset to determine global palm tree distribution in relation to temperature.

"In our paper, we draw a fully quantitative line in the sand and ask, 'How cold is too cold for palms?'" said Reichgelt.

Reichgelt and co-authors David Greenwood from Brandon University and Ph.D. student Christopher West from the University of Saskatchewan launched the study to investigate how plants will redistribute as climate zones shift. This is important for predicting how landscapes and ecosystems will evolve. Palms are particularly interesting to the researchers because they cannot propagate in freezing temperatures.

"Palms are therefore sensitive indicators of changing climates, both in the remote geological past and in the present day," said Greenwood.

There are signs that palms have already begun flourishing in untraditional settings at higher latitudes. One study found them in the foothills of the Swiss Alps, after a decorative palm escaped cultivation into the mountains; it spread simply because frost is not as prevalent as it used to be.

The new study, published in Nature's Scientific Reports, concludes that the absolute limit of palm distribution depends on the average temperature of a region's coldest month, which has to be above 2 degrees Celsius or 36 degrees Fahrenheit. The findings offer a glimpse into the possible effects of climate change; as climate zones shift northward, plant habitats might, too.

"As an example, this means that at present, Washington DC is just a little too cold (34 degrees F in January) for palms to successfully propagate in the wild, but that you can expect range expansion in the coming decades as average winter temperatures warm up," said Reichgelt.

The findings also help retrace Earth's past climates. The study found that the mere presence of palms in the fossil record indicates that past temperatures remained at or above a minimum possible amount (at least 2 to 5 degrees C).

"A palm tree conjures up images of the tropics," said Reichgelt. "But palm trees weren't always confined to the tropical places."

Furthermore, the researchers found that the temperature tolerance range of palm trees strongly depends on their evolutionary heritage. The specific palm species and its place on the palm family phylogenetic tree determine its minimum cold tolerance.

"If you find a palm fossil and can determine its affinity to a modern subgroup of the palm family, you can, using our data, determine the temperature of the climate of when that palm was growing," explained Reichgelt.

In reconstructions of past climates, the presence of palms is usually considered indicative of warm, equable climate conditions. Reichgelt says palm fossils have been identified from the Antarctic more than 50 million years ago and that, among other things, has led researchers to call the Antarctic at that time "near-tropical."

First direct observations of methane's increasing greenhouse effect at the Earth's surface

by Staff Writers

Berkeley CA (SPX) Apr 03, 2018

Scientists have directly measured the increasing greenhouse effect of methane at the Earth's surface for the first time. A research team from the U.S. Department of Energy's Lawrence Berkeley National Laboratory (Berkeley Lab) tracked a rise in the warming effect of methane - one of the most important greenhouse gases for the Earth's atmosphere - over a 10-year period at a DOE field observation site in northern Oklahoma.

These findings were published online April 2 in the journal Nature Geoscience in an article entitled "Observationally derived rise in methane surface forcing mediated by water vapour trends." The paper indicates that the greenhouse effect from methane tracked the global pause in methane concentrations in the early 2000s and began to rise at the same time that the concentrations began to rise in 2007.

"We have long suspected from laboratory measurements, theory, and models that methane is an important greenhouse gas," said Berkeley Lab Research Scientist Dan Feldman, the study's lead author. "Our work directly measures how increasing concentrations of methane are leading to an increasing greenhouse effect in the Earth's atmosphere."

Gases that trap heat in the atmosphere are called greenhouse gases, in large part because they absorb certain wavelengths of energy emitted by the Earth. As their atmospheric concentrations change, the scientific community expects the amount of energy absorbed by these gases to change accordingly, but prior to this study, that expectation for methane had not been confirmed outside of the laboratory.

The scientists analyzed highly calibrated long-term measurements to isolate the changing greenhouse effect of methane. They did this by looking at measurements over the wavelengths at which methane is known to exert its greenhouse effect and coupled those with a suite of other atmospheric measurements to control for other confounding factors, including water vapor.

This study was enabled by the comprehensive measurements of the Earth's atmosphere that the DOE has routinely collected for decades at its Atmospheric Radiation Measurement (ARM) facilities, and conversely, would not be possible without such detailed observations.

The DOE ARM program manages and supports three long-term atmospheric observatories - the Southern Great Plains observatory in Oklahoma, the North Slope of Alaska observatory in far-northern Alaska, and the Eastern North Atlantic observatory on the Azores Islands.

The program also deploys three ARM mobile facilities and several ARM aerial facilities. Together, these assets enable scientists to perform highly-detailed, targeted investigations to advance the fundamental scientific understanding of the Earth system.

The researchers believe this type of direct field observation can provide a more accurate and complete picture of the relationship between atmospheric greenhouse gas concentrations and their warming effect on Earth's surface.

UNSW launches 'world-first' e-waste microfactory

The University of New South Wales' microfactory uses a variety of modules that sort and transform discarded laptop and smartphone parts into reusable materials.

By Jonathan Chadwick | April 5, 2018 -- 00:39 GMT (17:39 PDT) |

The University of New South Wales (UNSW) has launched what it calls the world's first e-waste microfactory in an effort to reduce Australia's electronic waste.

Following research at the university's Centre for Sustainable Materials Research and Technology (SMaRT Centre), the microfactory has been launched as the first in a series under development at UNSW that can turn consumer waste such as discarded smartphones and laptops into reusable materials.

According to UNSW, the microfactory has the potential to reduce the environmental harm caused by Australia's vast amounts of e-waste, and it offers an alternative to practices such as burning or burying e-waste.

"Our e-waste microfactory and another under development for other consumer waste types offer a cost-effective solution to one of the greatest environmental challenges of our age, while delivering new job opportunities to our cities but importantly to our rural and regional areas, too," said UNSW professor Veena Sahajwalla.

"These microfactories can transform the manufacturing landscape, especially in remote locations where typically the logistics of having waste transported or processed are prohibitively expensive," she added. "This is especially beneficial for the island markets and remote and regional regions of the country."

The microfactory is formed of separate modules through which waste materials are passed. Discarded devices, such as computers, phones, and printers, are first broken down, before being scanned by a robotic module for the identification of useful parts, which are then transformed into valuable materials using a controlled temperature process.

Computer circuit boards can be turned into valuable metal alloys such as copper and tin that can be used as metal components, the university said, while in another module, glass and plastic can be converted into micromaterials that can be used in industrial grade ceramics and plastic filaments for 3D printing.

UNSW aims to create initiatives for industry to take up the technology, and it is already in partnership with recycler TES and mining manufacturer Moly-Cop.

According to a United Nations University study, Oceania -- the region comprising Melanesia, Micronesia, Polynesia, and Australasia -- generated 0.9 million tonnes of e-waste as of 2014 and 15.2kg of e-waste per capita.

Asia generates the largest volume of e-waste and is the largest consumer of electrical and electronic equipment, the study said. Singapore and Hong Kong are among the biggest dumpers of e-waste in the East and Southeast Asian region, generating 21.7kg and 19.95kg, respectively, per capita.

Between 2010 and 2015, e-waste grew by 63 percent to 12.3 million tonnes, the study added.

Strawberries Top the 'Dirty Dozen' List of Fruits and Vegetables With the Most Pesticides

In the latest report about pesticide residues, the Environmental Working Group says that 70% of conventionally grown fruits and vegetables contain residues of up to 230 different pesticides or their breakdown products.

The analysis, based on produce samples tested by the U.S. Department of Agriculture, found that strawberries and spinach contained the highest amounts of pesticide residues. One sample of strawberries, for example, tested positive for 20 different pesticides, and spinach contained nearly twice as much pesticide residue by weight as any other fruit or vegetable.

The two types of produce topped the EWG ranking of the 12 fruits and vegetables with the highest concentrations of pesticides—the so-called “Dirty Dozen.” After strawberries and spinach come nectarines, apples, grapes, peaches, cherries, pears, tomatoes, celery, potatoes and sweet bell peppers. More than 98% of peaches, cherries and apples contained at least one pesticide.

This year’s list nearly mirrors the one from last year, suggesting that little has changed in how these crops are grown. (The analysis applied only to produce that wasn’t grown organically.)

How dangerous is the exposure to the chemicals? Since federal laws in 1996 mandated that the Environmental Protection Agency (EPA) study and regulate pesticide use for its potential to harm human health, many toxic chemicals have been removed from crop growing. But studies continue to find potential effects of exposure to the pesticides still in use. A recent study, for instance, indicated a possible link between exposure to pesticides in produce and lower fertility.

More studies are needed to solidify the relationship between current pesticide exposures from produce and long-term health effects. In the meantime, researchers say that organic produce generally contains fewer pesticide residues, and people concerned about their exposure can also focus on fruits and vegetables that tend to contain fewer pesticides. Here is the EWG’s list of the fruits and vegetables lowest in pesticide residue—the so-called Clean 15:

Avocados

Sweet corn

Pineapples

Cabbage

Onions

Frozen sweet peas

Papayas

Asparagus

Mangoes

Eggplants

Honeydews

Kiwis

Cantaloupes

Cauliflower

Broccoli

How fake meat might feed your dog & cat while helping to fight climate change

By Larissa Zimberoff / Bloomberg / WP Bloomberg

Posted Apr 8, 2018 at 3:30 PM

In America’s food-obsessed landscape, the quickest route to a new idea is to look for something already being done, and then make it vegan.

Wild Earth Inc., a startup based in Berkeley, California, is doing that to pet food with lab-created proteins. Translated, that means fake meat for Fido.

The stakes are far from small potatoes. Sixty-eight percent of American households own four-legged friends, a paw-dropping 184 million dogs and cats to be precise. To feed this mass of tail-wagging companions, we spend almost $30 billion annually. Pet food, predominantly animal-meat products, represents as much as 30 percent of all meat consumption in America.

In a first-of-its-kind study on how that sweet blond lab on your kitchen floor impacts the environment, UCLA professor Gregory Okin writes that if American pets were to establish a sovereign nation, it would rank fifth in global meat consumption. This nation of pooches and kitties consumes about 19 percent as many calories as humans, but because their diets are higher in protein, their total animal-derived calorie intake amounts to about 33 percent that of humans.

“If you’re feeding your large dog the same as you, your dog is eating more meat than you are,” said Dr. Cailin Heinze, a Tufts faculty member and board-certified veterinary nutritionist.

Food consumption by dogs and cats is responsible for releasing up to 64 million tons of greenhouse gases every year. Developing fake meat for pets may help put a dent in that, as well as the use of water and land needed to breed all that livestock. In doing so, the industry might pave the way toward replacing all the real meat in your fridge, too.

As global human population approaches 8 billion, said Ron Shigeta, one of the founders of Wild Earth, “the opportunity here is to create something that is safe and sustainable.”

First, they’re starting with your pets. With $4 million in seed money, Wild Earth hopes to be the first pet food brand based on cellular agriculture. In 2013, Shigeta and co-founder Ryan Bethencourt started Berkeley Biolabs, followed by Indie Bio, a Bay Area synthetic biology accelerator, before getting into pet food, which, like products for human consumption, has tilted ever more toward higher nutritional value.

The initial product Wild Earth plans to sell from its direct-to-consumer website is a koji-based dog treat. That’s a lucrative choice, apparently, since the American Pet Products Association said dogs are given more treats than any other pet species. Market research firm Kerry reports that 34 percent of new product development for pet food last year was in treats.

Bethencourt compares his company’s production of “clean” protein to that of sake (imagine giant fermentation tanks), right down to using the same ingredient to fuel its protein growth. Koji, a fungus, is the Japanese version of baker’s yeast. It grows rapidly inside tanks, along with sugar and nutrients, at the right balmy temperature. The result is a plant-based protein with a close match to eggs or animal-based meat. Because koji is widely consumed by humans, it already has a GRAS (Generally Recognized As Safe) designation. Wild Earth’s supply chain is simple (it uses only a handful of ingredients) and easily traceable.

“Now that millennials have officially taken the reins as the primary demographic of pet owners, they stand to further develop the humanization-of-pets trend,” writes Bob Vetere, the president of APPA, in its annual pet survey. A lot of that has to do with the environment and an increased emphasis on nutrition, but that’s not all there is to it.

So far this year, there have been recalls due to Listeria, Salmonella and pentobarbital contamination. The J.M. Smucker Co., which makes Gravy Train and Kibbles ’N Bits, as well as a private label food for Walmart Inc., had to voluntarily recall its dog food when traces of pentobarbital were found. Use of fake meat may obviate risks associated with supply chains that rely on meat scraps.

The pet food space these days is red-hot. General Mills was so eager to get into the business that it paid $8 billion to acquire Blue Buffalo in February. Meanwhile, Mars Petcare US recently launched the Companion Fund, a $100-million venture fund to invest in the pet industry.

But for cutting-edge pet food born in the lab, hurdles await. To date, no cellular meat company (Memphis Meat, Just, Finless Foods, among others) has found a way to create meat from scratch in a scalable, affordable way; 31 percent of dog and cat owners already complain about the cost of pet food, the APPA said. There’s also the “ick” factor of meat made in labs, even when we’re talking about our pets, let alone when we eventually might eat it ourselves.

With pet food products ranging from offal to insects to alligators, who’s to say vegan can’t join the mix? Bethencourt and Shigeta contend that “cellular agriculture has the unique potential to rebuild the supply chain from farm-to-table.” Marion Nestle, author of several books on pet food, is skeptical: “The operative word is ‘potential,’” she said. “Let’s see how it works in practice.”

Big increase in Antarctic snowfall

By Jonathan Amos

BBC Science Correspondent, Vienna

Liz Thomas on Antarctica's greater snowfall: "As things get warmer, they get wetter."

Scientists have compiled a record of snowfall in Antarctica going back 200 years.

The study shows there has been a significant increase in precipitation over the period, up 10%.

The effect of the extra snow locked up in Antarctica is to slightly slow a general trend in global sea-level rise.

However, this mitigation is still swamped by the contribution to the height of the oceans from ice melt around the continent.

Some 272 billion tonnes more snow were being dumped on the White Continent annually in the decade 2001-2010 compared with 1801-1810.

This yearly extra is equivalent to twice the water volume found today in the Dead Sea.

Put another way, it is the amount of water you would need to cover New Zealand to a depth of 1m.
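As a rough consistency check of that comparison (a back-of-envelope sketch, not a figure from the study): 272 billion tonnes of water occupies roughly 272 cubic kilometres, and spreading that over New Zealand's land area does come out close to a metre. The New Zealand area used below is a commonly cited approximate figure, not one taken from the paper.

```python
# Rough arithmetic behind the "cover New Zealand to a depth of 1 m" comparison.
# The New Zealand land area used here (~268,000 km^2) is a commonly cited figure,
# not a number taken from the study itself.

extra_snow_tonnes = 272e9          # extra snowfall per year, 2001-2010 vs 1801-1810
water_density_t_per_m3 = 1.0       # 1 tonne of water occupies ~1 m^3
new_zealand_area_km2 = 268_000     # approximate land area

volume_m3 = extra_snow_tonnes / water_density_t_per_m3
area_m2 = new_zealand_area_km2 * 1e6
depth_m = volume_m3 / area_m2

print(f"Equivalent water volume: {volume_m3 / 1e9:.0f} km^3")  # ~272 km^3
print(f"Depth over New Zealand:  {depth_m:.2f} m")             # ~1.0 m
```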

Dr Liz Thomas presented the results of the study at the European Geosciences Union (EGU) General Assembly here in Vienna, Austria.

Much of the extra snow has fallen on the Antarctic Peninsula

The British Antarctic Survey (BAS) researcher said the work was undertaken to try to put current ice losses into a broader context. "The idea was to get as comprehensive a view of the continent as possible," she told BBC News.

"There's been a lot of focus on the recent era with satellites and how much mass we've been losing from big glaciers such as Pine Island and Thwaites. But, actually, we don't have a very good understanding of how the snowfall has been changing.

"The general assumption up until now is that it hasn't really changed at all - that it's just stayed stable. Well, this study shows that's not the case.”

Cores have been collected across the continent - making this the largest study of its kind

Dr Thomas and colleagues examined 79 ice cores drilled from across Antarctica. These long cylinders of frozen material are essentially just years of compacted snow.

By analysing the cores' chemistry, it is possible to determine not only when their snows fell but also how much precipitation came down. For example, one key marker used to differentiate one year from the next, even seasons, is hydrogen peroxide.

This is a photochemical product that forms in the atmosphere when water vapour encounters sunlight.

"For us, that's perfect. Antarctica works like an on-off switch with the long 'polar nights' in winter and long periods of daylight in summer," Dr Thomas explained.

The previous, most extensive survey of this kind assessed just 16 cores. The new study is therefore much more representative of snowfall behaviour across the entire continent.

It found the greater precipitation delivered additional mass to the Antarctic ice sheet at a rate of 7 billion tonnes per decade between 1800 and 2010 and by 14 billion tonnes per decade when only the period from 1900 is considered.

Most of this extra snow has fallen on the Antarctic Peninsula, which saw significant increases in temperature during the 20th Century.

"Theory predicts that, as Antarctica warms, the atmosphere should hold more moisture and that this should lead therefore to more snowfall. And what we're showing in this study is that this has already been happening," Dr Thomas said.

Satellites routinely map Antarctica, but their data record is only about 25 years

The BAS researcher is keen to stress that the increases in snowfall do not contradict the observations of glacial retreat and thinning observed by satellites over the last 25 years. Although the extra snow since 1900 has worked to lower global sea level by about 0.04mm per decade, this is more than being countered by the ice lost to the oceans at Antarctica's margins, where warm water is melting the undersides of glaciers.
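The 0.04mm-per-decade figure can be reproduced from the snowfall numbers above with simple arithmetic: spread roughly 14 billion tonnes of extra snow per decade over the global ocean surface. A hedged sketch follows; the ocean area of about 361 million square kilometres is a commonly cited value assumed here, not a number from the study.

```python
# Back-of-envelope check: how much sea-level rise is offset by the extra
# Antarctic snowfall reported in the study. The ocean area is an approximate,
# commonly cited value, not a figure from the paper.

extra_snow_kg_per_decade = 14e9 * 1000                        # 14 billion tonnes -> kg
water_equivalent_m3 = extra_snow_kg_per_decade / 1000.0       # ~1 m^3 per tonne of water
ocean_area_m2 = 361e6 * 1e6                                   # ~361 million km^2 -> m^2

sea_level_offset_m = water_equivalent_m3 / ocean_area_m2
print(f"Sea-level offset: {sea_level_offset_m * 1000:.3f} mm per decade")  # ~0.04 mm
```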

Dr Anna Hogg, from Leeds University, UK, uses radar satellites to measure the shape and mass of the ice sheet.

She told BBC News: "Even with these large snowfall events, Antarctica is still losing ice mass at a faster rate than it is gaining mass from snowfall, mainly due to the regions of known ice dynamic instability, such as in the Amundsen Sea Embayment which includes Pine Island and Thwaites glaciers.

"The Antarctic 4.3mm contribution to global sea level since about 1992 is still our best estimate."

Liz Thomas' research has been published in the EGU journal Climate of the Past.

To See Offshore Wind Energy’s Future, Look on Shore – in Massachusetts

A former whaling port has been retrofitted to serve the wind industry, a blade-testing center is up and running, and the state has an offshore wind power mandate.

By Jan Ellen Spiegel, InsideClimate News

Apr 9, 2018

Wind turbine blades are tested at the Wind Technology Testing Center in Boston's Charlestown neighborhood, part of the state's growing wind industry. Credit: Massachusetts Clean Energy Center

As the U.S. enters the global offshore wind market in earnest, Massachusetts is a state to watch—on shore as well as off.

In a few weeks, the state will announce which among three proposed offshore wind projects it wants to move ahead. No matter which are chosen, Massachusetts wins.

That's because more than a decade ago, the state began developing the onshore components for offshore wind, including a major offshore wind-ready port in New Bedford, a wind turbine blade testing center in Charlestown and workforce training initiatives. Officials envisioned Massachusetts as the hub for an entire future offshore wind industry.

In 2006, with a proposal for offshore wind power in the region under consideration, Matthew Morrissey began exploring what the industry could do for New Bedford, where he was the city's top economic development official. New Bedford was once a major port, but its whaling days were long gone and its fishing industry drastically diminished.

Offshore wind, he thought, could be its resurrection. He initiated studies and began discussing the idea with lawmakers. Today he's a vice president at Deepwater Wind, one of the offshore wind developers waiting to hear about their project.

Morrissey and others say it could have come to nothing without Massachusetts' groundbreaking state policies, beginning with ambitious clean energy goals in 2008 and the state's first-in-the-nation offshore wind energy mandate. The mandate, approved in August 2016 by Republican Gov. Charlie Baker, requires the state's utilities to have long-term contracts for 1,600 megawatts of offshore wind power by June 2027.

The competitive solicitation for the first quarter of that is about to be completed, and it includes proposals from all three developers holding lease areas off New England—Deepwater Wind, Bay State Wind and Vineyard Wind.

"The smartest thing they did—they passed that legislation and then they swiftly and effectively implemented it," said Stephanie McClellan, director of the University of Delaware's Special Initiative on Offshore Wind. "They will be rewarded for that."

New Manufacturing—and Jobs

It would seem the state is already seeing those rewards. Just this month, Bay State Wind, a partnership between Denmark's Ørsted and New England's Eversource Energy, announced it would open a facility in Massachusetts to manufacture offshore wind components with steel pipe manufacturer EEW and Gulf Island Fabrication. It predicted the plant would create 500 jobs and lead to 1,200 more in the local community.

Deepwater Wind said in March it was looking at three Massachusetts locations for a wind turbine foundation assembly facility. In February, Anbaric Development Partners, a transmission and microgrid company already located in Massachusetts, received approval from the Federal Energy Regulatory Commission to develop a transmission system for offshore wind in southern New England.

Thomas Brostrøm, president of Ørsted North America, said Massachusetts' procurement mandate was the "decisive factor" that allowed offshore wind and the economic development around it in Massachusetts to move ahead.

"This gives the visibility and the kind of certainty you need when you're looking at multi-billion-dollar investments," he said. "You need to have some kind of reassurance that the state is serious about it, and I think there was a clear signal from Massachusetts."

It was an eye-opening trip to Denmark that convinced Massachusetts officials their risk would pay off. While the small nation had a modest amount of its own wind power, Denmark housed the lion's share of Europe's onshore support for the offshore wind industry.

"Massachusetts has really seen the opportunity to be the Denmark of North America," said Stephen Pike, CEO of the Massachusetts Clean Energy Center, which operates the New Bedford and Charlestown facilities. "If we can be first out of the gate, get that first mover advantage—then I think, and I think others before me have believed, that we can maintain that pole position."

The state has agreements from all three developers to use New Bedford's Marine Commerce Terminal, which opened in 2015, for at least some of their staging operations once construction begins. All three have already been using it as a home port for a survey vessel to take measurements and do other preparatory work.

The port was also a staging area for construction of the only offshore wind farm in the U.S., the Block Island Wind Farm off Rhode Island, which has five turbines able to produce 30 megawatts of power.

The Massachusetts Clean Energy Center, which began operating in 2009, completed an assessment last year of 18 additional port areas that would also be suitable for an offshore wind infrastructure network.

Erich Stephens, chief development officer for Vineyard Wind, a partnership of Avangrid Renewables and Copenhagen Infrastructure Partners, said underwater surveys conducted by the state allowed them to put together a proposal much faster. And the New Bedford terminal, designed with extra lifting capacity to handle heavy wind components, was critical.

The New Bedford Marine Commerce Terminal serves as a staging area for large offshore wind industry components. Credit: Massachusetts Clean Energy Center

"We could have figured out ways to build the project without it," he said. "But it would have meant that a lot of the jobs would have gone outside of Massachusetts and meant the project would have cost more."

As part of its proposal, Vineyard Wind has pledged up to $2 million for what it calls its Windward Workforce training program with community colleges and an area high school. It is also committing $10 million to an accelerator fund for infrastructure upgrades and business relocations and adaptations that would help build a U.S., and preferably Massachusetts-based, supply chain for the industry.

It would reduce, if not eliminate, the need to bring in components and workers from Europe, which is where just about everything the offshore wind industry needs is located now.

Massachusetts first got wind of offshore wind in 2001, when an ambitious first-in-the-nation, 130-turbine project to be called Cape Wind was proposed for Nantucket Sound. Fought by powerful interests from one end of the political spectrum to the other, the Kennedy family on the left and the Koch family on the right, it was finally declared dead last year.

But the potential it represented for offshore wind—already proven in Europe—was not lost on business and political leaders in the state.

In 2008, when the state approved ambitious cuts in greenhouse gas emissions and set higher clean energy goals, the legislature and then-Gov. Deval Patrick came up with $113 million to build the Marine Commerce Terminal, even though the only project on the horizon was the perpetually snake-bit Cape Wind.

The small Block Island Wind Farm, built by Deepwater Wind, is the nation's only offshore wind farm. Several states and companies have plans to change that.

"There is a period of time where you're investing and you're not sure it will be successful," said Sen. Marc Pacheco, the current president pro tempore of the Massachusetts Senate and the founding chair of the Senate Committee on Global Warming and Climate Change. He was a key force behind the financing, as well as the 2008 legislation, the 2016 offshore wind mandate and current legislation to increase that mandate to 5,000 megawatts by 2035.

"On the other hand," he said, "if you don't try, and you're not getting yourself positioned to be at the cutting edge of this new sector, then you could lose out."

New York State now has a goal of 2,400 megawatts of offshore wind power by 2030, and New Jersey announced a mandate for 3,500 megawatts, also by 2030.

Just How Much Offshore Potential Is There?

Since 2013, the Bureau of Ocean Energy Management (BOEM) has auctioned more than a dozen offshore wind leases in federal waters off the East Coast, about half of them from New England to New Jersey. Two more off Massachusetts and one off New York are due to be auctioned this year, and just this month BOEM said it would begin a process to look for more offshore wind locations in the Atlantic.

BOEM estimates that just the existing leased areas off New England and New York can support about 7,600 megawatts of power generation.

The Fugro Explorer has been used by all three companies for offshore surveys and underwater testing. Credit: Massachusetts Clean Energy Center

A series of reports released last year by the Clean Energy States Alliance on behalf of three of its members—New York, Massachusetts and Rhode Island—estimated that about 8,000 megawatts of offshore wind could be developed in the northeast by 2030 and that it could result in the creation of nearly 40,000 jobs.

Numbers like that, experts, state officials and wind developers say, mean that the onshore staging will require more than one port.

Possibly no one knows that better than Deepwater Wind's Morrissey. When his company developed the Block Island project, its staging required four ports, including New Bedford's.

Massachusetts officials are under no illusion that they will be able to monopolize onshore operations for the U.S. offshore wind industry indefinitely. "Competitive cooperation" is what Pike, of the Massachusetts Clean Energy Center, calls it. "The gains of one state can certainly lead to gains in the other states," he said.

New York has developed an offshore wind master plan including a workforce training financial commitment. Connecticut is also positioning its port of New London to be a major staging area. It is already home to submarine builders and, unlike New Bedford, does not have a hurricane barrier that can restrict larger vessels.

Pike believes Massachusetts has the early edge in attracting manufacturing, providing jobs for welders, electricians and carpenters. "The big kahuna is manufacturing turbines," he said.

The tricky part is convincing states to spend money now to get offshore wind installation to a level that warrants development of a U.S. supply chain to bring costs down.

University of Delaware's McClellan offers elements of how to do that: First is training a workforce. Another is structuring timing so there are not more simultaneous projects than there are workers or facilities to handle them.

"This is going to be a U.S. industry. This isn't going to be a little blip on the radar screen of a global industry that's going to be looked at as kind of the orphan," she said. The states may be in competition, but "the more that they see themselves as 'all ships float or sink' is a way that we need to move in this sector on the East Coast."

At Two Power Plants, Scientists Are Racing Each Other To Turn Carbon Into Dollars

The 10 finalists in the four-year-long Carbon XPRIZE have been selected, and will embark on a two-year project to show that there’s a market for products made from captured carbon.

It sounds like something out of science fiction: 10 teams of scientists and innovators that are working on plans to convert carbon emissions into useful products will ship out to two carbon dioxide-emitting power plants. Five teams will travel to a natural gas-fired plant in Alberta, Canada, and the five others will go to a coal-powered plant in Gillette, Wyoming. There, they’ll have two years to prove the validity of their models.

This is, in fact, the final stage of the Carbon XPRIZE–a four-and-a-half-year-long, $20 million global competition to develop and scale models for converting carbon emissions into valuable products like enhanced concrete, liquid fuel, plastics, and carbon fiber. XPRIZE runs various competitions around topics ranging from water quality to public health. From 47 ideas first submitted to the challenge, a panel of eight energy and sustainability experts whittled the list down to the final 10. The two winners–one from the Canada track, and the other from Wyoming–will each receive a $7.5 million grand prize to bring their innovation to market.

“We give the teams literally the pipes coming out of the power plants, and they can bring whatever technology they’re developing to plug into that source,” says Marcius Extavour, XPRIZE senior director of Energy and Resources and the lead on the Carbon XPRIZE competition. Teams will be judged on how much CO2 they convert, and the net value of their innovations.

The finalists stationed in Wyoming include C4X, a team from Suzhou, China producing bio-foamed plastics, and Carbon Capture Machine from Aberdeen, Scotland, which is making solid carbonates that could potentially be used in building materials. Carbon Cure from Dartmouth, Canada, and Carbon Upcycling UCLA from Los Angeles are both experimenting with CO2-infused concrete, and Breathe from Bangalore is making methanol, which can be used as fuel.

In Alberta, Carbicrete from Montreal is making concrete with captured CO2 emissions and waste from steel production, and Carbon Upcycling Technologies from Calgary is producing nanoparticles that can strengthen concrete and polymers. CERT from Toronto is making ingredients for industrial chemicals, C2CNT from Ashburn, Virginia, is making tubing that can serve as a lighter alternative to metal, say for batteries, and Newlight from Huntington Beach, California, is making bioplastics.

“This XPRIZE is about climate change, sustainability, and getting to a low-carbon future,” Extavour says. “The idea is to take emissions that are already being produced, and preventing them from leaking out into the atmosphere or oceans or soil, and converting them, chemically, into valuable material.”

Carbon capture is not a new idea. The concept of trapping carbon emissions before they seep out from a power plant by sequestering them in the ground, or sucking them out of the air, as a facility in Zurich does, has been around for years, but not without controversy. If innovators are able to scale carbon-capture and conversion models, will it stop the push toward renewables?

The two entities sponsoring the prize certainly make it seem like that’s a possibility: NRG, a large energy company that manages power plants across the U.S., and Canada’s Oil Sands Innovation Alliance, a consortium of oil sands producers. (NRG has made efforts to reduce its emissions; it’s retiring three natural gas-fired plants across California over the next year.)

But Extavour does not see carbon conversion as antithetical to reducing emissions overall. Rather, “I think it’s complementary,” he says. “We just don’t have the option of turning off our CO2-emitting resources today.” While emission-free options like solar, wind, and geothermal are scaling, he says, they’re not doing so fast enough to completely replace carbon. “There’s still a hard core of emissions from sectors like manufacturing that we have to get our hands around,” Extavour says.

“This isn’t about a proposal anymore,” Extavour says. “This is about: Can you build it in a way that works and is reliable? And can you do it in a way that’s not just climate and carbon sustainable, but economically sustainable? Can you build a business around this technology? Because if you can, that’s how we can get emitters of CO2 today to actually adopt these solutions and scale them up, and really take a bite out of emissions.”

Toxic flame retardants declining in NYC kids’ blood

Phase-out of the toxics is working—but every kid tested still had detectable levels in their blood

The levels of harmful flame retardants in children's blood are dropping every year, according to a new study of kids from New York City.

The flame retardants—polybrominated diphenyl ethers (PBDEs)—were used for decades in furniture, electronics and clothing in an effort to slow the spread of flames if they catch fire. The chemicals were voluntarily phased out starting in 2004 because they build up in the environment and people—PBDEs are found in the air (in and outside our homes), some food, and in people all around the world.

People are mostly exposed by breathing in contaminated dust. The chemicals are linked to a host of health problems, including impaired brain development, altered thyroid hormones, lower IQs in exposed children and some birth defects.

The new study in NYC, which followed 334 mothers and their children from 1998 to 2013, is the first to show a decline in PBDEs in kids' blood, demonstrating that, despite the chemicals' persistence, bans or phase-outs can reduce children's exposure.

"These findings reinforce the decision to phase-out PBDEs from consumer products," said co author Julie Herbstman, an associate professor and researcher at the Columbia University Mailman School of Public Health, in a statement.

The initial phase-out of PBDEs in 2004 was voluntary. Since then some states have banned PBDEs and the U.S. Environmental Protection Agency and chemical companies agreed to a phase-out of almost all PBDEs by 2014.

Herbstman and colleagues tested the mothers' umbilical cord blood, and the kids' blood at ages 2, 3, 5, 7 and 9. The most common PBDE chemical—BDE-47—decreased in the blood about 5 percent every year. When they only looked at the blood after birth, the levels dropped about 13 percent every year.
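To put those percentages in perspective, a steady decline of about 5 percent per year corresponds to a halving time of roughly 13-14 years, while 13 percent per year halves levels in about 5 years. The sketch below illustrates that arithmetic; the decline rates are the study's figures, but the constant-exponential-decline assumption and the halving-time calculation are ours, not the authors'.

```python
import math

# Convert the reported annual percentage declines in blood PBDE levels
# into approximate halving times, assuming a steady exponential decline.
# The 5% and 13% figures come from the article; the exponential model is
# an illustrative assumption, not the authors' analysis.

def halving_time_years(annual_decline_fraction: float) -> float:
    """Years for levels to fall by half at a constant annual fractional decline."""
    return math.log(2) / -math.log(1.0 - annual_decline_fraction)

for label, rate in [("All samples (~5%/yr)", 0.05), ("Postnatal samples (~13%/yr)", 0.13)]:
    print(f"{label}: levels halve in ~{halving_time_years(rate):.1f} years")
```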

This isn't the first time phase-outs have been linked to sharp decreases in PBDE exposure: in 2016, researchers reported that levels of the chemicals in Bay Area women's breast milk dropped nearly 40 percent about a decade after California banned the compounds in 2006.

However, lead author Whitney Cowell, a pediatric environmental health research fellow at Mt. Sinai, cautioned against being overly optimistic: The chemicals "continue to be detected in the blood of young children nearly 10 years following their removal from U.S. commerce," she said in a statement.

They found PBDEs in 80 percent of the cord blood samples and in every single blood sample from kids tested between ages 2 and 9. She also pointed out that since the PBDE phase-out began, scientists have been finding replacement chemicals in children's blood.

As PBDEs are replaced by other chemicals, Cowell and colleagues wrote, increasingly PBDE-contaminated products will end up in landfills, so the chemicals may more frequently leach into water supplies—which could "trigger a transition in human exposure pathways from dust to dietary sources."

Ten Years After California Ban, Big Drop in Fire Retardants in Breast Milk

By Bill Walker, Vice President and Editor in Chief

WEDNESDAY, MARCH 2, 2016

The science of biomonitoring – measuring the chemical pollution in people – produces a seemingly unbroken stream of horror stories, with study after study reporting a new toxic threat building up in our bodies. So when a study shows declining levels of toxic chemicals in people, it’s good news – and encouraging proof that citizen action against hazardous chemicals works.

In 2002, California state scientists tested the breast milk of Bay Area women and made a shocking finding: Levels of a class of industrial chemicals that can permanently harm the nervous system and development of fetuses and infants were the highest ever measured in the world – up to 60 times higher than in European women.

The chemicals were polybrominated diphenyl ethers, or PBDEs, used as fire retardants primarily in furniture and electronics. The extremely high levels in Bay Area women were due to California’s strict fire-resistance standards for upholstery, making the state the heaviest user of the type of PBDEs most likely to build up in people, animals and the environment.

EWG and other environmental groups launched a campaign to ban or restrict PBDEs, which studies showed were not needed for fire safety. We tested for PBDEs in fish in San Francisco Bay, in breast milk in American women nationwide and in dust from the women’s houses, each time finding dangerously high levels.

California lawmakers quickly enacted a ban on PBDEs, taking effect in 2006. A dozen other states followed suit and the U.S. Environmental Protection Agency negotiated an agreement with chemical companies to phase out most PBDEs by 2014.

Last month, California scientists who conducted a follow-up study to assess the effect of the ban reported that levels of PBDEs in the breast milk of Bay Area women have dropped by almost 40 percent.

“This is good news for parents and children,” Barbara Lee, director of the California Department of Toxic Substances Control, said in a news release. “It shows that by taking action on harmful chemicals in consumer products we can reduce our uptake of those chemicals and better protect public health.”

The findings show the effectiveness of the state’s Safer Consumer Products program, one of the first in the U.S. to implement the principles of so-called green chemistry.

“This is the goal of the Safer Consumer Products program, which is asking manufacturers who market their products in California to find safer alternatives for the toxic chemicals in their products,” said Meredith Williams, the program’s deputy director.

The news isn’t all good. The study found that despite the decline, babies born to all the women were exposed to some PBDEs, almost a third of them to very high concentrations of the chemicals. And in 2014 a study from EWG and Duke University found that the class of fire retardant chemicals that has replaced PBDEs, including some known to cause cancer, is building up in the bodies of mothers and their children.

The U.S. Consumer Product Safety Commission is considering a petition from scientists and advocates, including EWG, to ban this new generation of fire retardants from children’s products, furniture, mattresses and household electronics. Once again, citizens are taking action to push government regulators to do their job and protect public health.

Ultimately, the only way to prevent trading one group of bad chemicals for others that may be just as dangerous is to reform the nation’s Toxic Substances Control Act to ensure that chemicals are proven safe before they’re allowed on the market – and before they show up in people’s bodies.

This is a ‘test case’ for whether Trump is really serious about saving coal and nuclear plants

By Steven Mufson April 3 at 1:56 PM

A broad array of critics are urging the Trump administration to ignore pleas to use its emergency powers for a bailout of nuclear and coal plants owned by a unit of the politically well-connected FirstEnergy.

FirstEnergy Solutions, a subsidiary of the FirstEnergy holding company, filed for bankruptcy last weekend and, warning of a “power crisis,” beseeched the Energy Department to invoke emergency powers to keep open four nuclear reactors and several coal plants in the name of grid reliability.

The utility comes to the issue well armed with lobbyists who have connections to Congress and the Energy Department. It has spent on average more than $2 million a year for the past seven years on lobbying. Its top lobbyist last year was Jeff Miller, who was campaign manager for the presidential campaign of Rick Perry, now energy secretary. CNN reported that President Trump will attend a private dinner hosted by Miller Wednesday night.

But environmental groups, former regulators, rival utility executives and consumer groups have declared that there is no emergency and that tilting the scales in FirstEnergy Solutions’ direction would undermine the free-market competition that has helped keep rates low for consumers.

“It’s corporate welfare, and it is not something we should tolerate because all it does is make consumers pay more for power plants that should go through belt-tightening or leave the market,” Abraham Silverman, general counsel for the utility NRG, said in an interview.

“We have a set of rules,” Silverman added. “We all know what those rules are. We’re all playing by the same rules. If you start taking an individual company’s plants and say the rules of the market don’t apply to you, we start losing the competitiveness of markets.”

The bankruptcy filing is the latest cry for government assistance by FirstEnergy and one of its main suppliers of coal, Murray Energy.

“Trump seems to have a soft spot for coal, and this will be a test case for whether Trump’s policies are having an effect on the coal industry and bringing it back,” said Jack M. Tracy II, head of legal analysis at Debtwire, a New York research firm.

FirstEnergy chief executive Charles E. Jones — who earned $13.9 million last year — met with President Trump in mid-2017 to ask for assistance to coal and nuclear plants. Earlier, in March 2017, Murray Energy chief executive Robert E. Murray had urged Perry to declare an emergency to help coal plants and mines.

Perry instead asked the independent Federal Energy Regulatory Commission to factor in resilience and reliability and to subsidize coal and nuclear plants that had 90 days of fuel on site. FERC — whose new chairman, Kevin J. McIntyre, had FirstEnergy as a client at the firm Jones Day — rejected that request in January.

Now FirstEnergy Solutions is trying a new tactic, known as a 202(c) application, which asks the Energy Department to invoke an emergency and force the regional grid operator to pay nuclear and coal plants a guaranteed profit if they’re capable of storing 25 days’ worth of fuel on site.

It also has threatened to close down four nuclear power units with more than four gigawatts of capacity.

The Energy Department said in an email that the application “is now under review.”

“We see the request to DOE as a last-ditch effort that is very unlikely to succeed,” said a note to investors by the advisory firm Height Securities.

“It’s not an emergency,” said Vincent P. Duane, general counsel at PJM Interconnection, a regional transmission system that covers all or parts of 13 states and the District. “It’s a fair question to ask what may happen in three years,” he said, but he added: “We’re not expecting anything will happen as far as these plants’ performance in the near or medium term.”

Regional transmission systems such as PJM dispatch the lowest-cost power plants first to keep costs down. They also plan several years ahead to make sure there is enough power supply. Duane noted that the FirstEnergy nuclear plants will continue to operate during bankruptcy and beyond.

FirstEnergy appears to be trying to upset that planning and balance. It has said it will not participate in PJM’s next auction, covering 2021 to 2022. Moody’s analyst Toby Shea, an expert on the PJM regional grid, said taking the nuclear units out of the auction could boost power costs sharply, indirectly benefiting coal plants in the region.

But Shea added that the nuclear plants are “uneconomic” and amount to just 2 percent of PJM’s capacity. One unit needs to spend hundreds of millions of dollars replacing a steam generator, he said.

PJM’s Duane said the emergency provision wasn’t designed for this kind of issue but for very specific purposes, such as “continuance of a war,” and only to “require by order temporary connections of facilities.” One example: It kept open an Alexandria, Va., coal-powered plant because of national security concerns about blackouts in the nation’s capital.

“This looks very much to us like a corporate decision,” Duane said. “We don’t see immediate reliability concerns. And we’re not expecting any long-term reliability concerns.”

FirstEnergy has repeatedly sought government help in the past, many analysts say. The company is the product of a 1997 merger. Like many utilities at the time, FirstEnergy moved to shrink its regulated business, where public utility commissions approved unspectacular but reliable returns, and rushed instead to take advantage of new competitive electricity markets. FirstEnergy put its merchant assets in a subsidiary, FirstEnergy Solutions.

For a while, the companies made considerable amounts of money. But then market conditions changed, and FirstEnergy and other utilities rushed back to the protection of utility commissions. FirstEnergy has said it would exit competitive markets by the middle of this year.

FirstEnergy also took part in a huge misguided merger with Allegheny Energy in 2010 that left it holding $3.8 billion in Allegheny debt and a fleet of coal plants, just as natural-gas prices began to plummet and put coal plants out of business.

Over the years, FirstEnergy has continued to try to take advantage of regulations and politics to bolster its fortunes. Dick Munson, director of Midwest energy for the Environmental Defense Fund, has said that in August 2014, FirstEnergy unsuccessfully sought $4 billion from the Ohio Public Utilities Commission to underwrite power purchase agreements that would have allowed one FirstEnergy affiliate to buy expensive power from another. FERC blocked the plan.

FirstEnergy later proposed a $4.5 billion bailout in return for keeping its headquarters in Akron, Ohio. And in 2016, it pressed the state public utilities commission to approve ratepayer subsidies to support its W.H. Sammis coal-fired power plant and Davis-Besse nuclear plant. The commission approved $131 million a year for three years and insisted the headquarters stay in Akron.

The utility also lobbied the Ohio legislature and Gov. John Kasich (R) to set aside the state renewable portfolio standards. Kasich rejected the effort.

FirstEnergy’s latest effort to pry subsidies from the government isn’t likely to succeed, analysts say. If the Energy Department did issue an emergency order, it probably would be blocked in court.

But the FirstEnergy Solutions bankruptcy case leaves other issues in limbo. FirstEnergy Solutions includes 33 coal ash ponds and 15 landfills that require cleanup under federal regulations. The firm also has a $350 million shortfall in a trust fund set aside for decommissioning its nuclear reactors.

The Environmental Defense Fund’s Munson said that “as an environmentalist, I fear that they will use bankruptcy to shed nuclear waste and coal-ash-cleanup obligations and try to put those on the backs of ratepayers or taxpayers.”

Steven Mufson covers energy and other financial matters. Since joining The Washington Post in 1989, he has covered economic policy, China, U.S. diplomacy, energy and the White House. Earlier he worked for The Wall Street Journal in New York, London and Johannesburg. Follow @StevenMufson

Pennsylvania Superior Court rules that fracking natural gas from a neighboring property is trespassing

Landmark ruling could open the door to "hundreds of trespass lawsuits"

On Monday the Pennsylvania Superior Court issued an opinion that could have major ramifications for the hydraulic fracturing industry in the state: it holds that a company trespassed on a family's land by extracting natural gas from beneath their property while operating a fracking well next door.

The Briggs family owns about 11 acres of land in Harford Township in Susquehanna County. When Southwestern Energy began operating an unconventional natural gas well on the adjacent property in 2011, the Briggs declined to lease their mineral rights to the company for development. In 2015, they filed a complaint that Southwestern was trespassing by extracting gas from beneath their property without a lease.

Southwestern didn't dispute they'd removed natural gas from beneath the Briggs' land, but argued they weren't trespassing due to the "rule of capture," which says the first person to "capture" a natural resource like groundwater, gas or oil owns it, regardless of property lines.

A lower court agreed with Southwestern and issued a summary judgment in their favor, but Monday's Superior Court opinion overturns that decision, stating that the rule of capture shouldn't apply to unconventional natural gas drilling because of key differences in the method of extraction.

"Unlike oil and gas originating in a common reservoir, natural gas, when trapped in a shale formation, is non-migratory in nature," the opinion states. "Shale gas does not merely 'escape' to adjoining land absent the application of an external force. Instead, the shale must be fractured through the process of hydraulic fracturing; only then may the natural gas contained in the shale move freely through the 'artificially created channel[s].'"

Ultimately, the Court said, "In light of the distinctions between hydraulic fracturing and conventional gas drilling, we conclude that the rule of capture does not preclude liability for trespass due to hydraulic fracturing."

The case has now been remanded to a lower court, which will rule on whether the Briggs are entitled to compensation from Southwestern Energy for trespassing on their property by taking natural gas without a lease. In the meantime, the family has been given the opportunity to further develop their trespass claim, including getting estimates of how far the subsurface fractures and fracking fluid crossed boundary lines into the subsurface of their property.

"I think this potentially has big ramifications for both drilling companies and property owners," said David E. Hess, the director of policy and communications for Harrisburg-based government affairs law firm Crisci Associates and former secretary of the Pennsylvania Department of Environmental Protection.

"If on remand the case requires compensation of the adjacent landowner for trespass as defined in the court decision, I think this could open the door to hundreds of potential similar trespass lawsuits filed all across Pennsylvania where unconventional gas well drilling occurs."

Hess pointed out it's hard to find an area in Pennsylvania's shale patch where existing natural gas extraction leases don't come up against property belonging to other landowners who didn't sell their mineral rights. He also speculated that before this ruling changes the way hydraulic fracturing operates in the state, there would likely be an attempt to clarify the law.

"I think if people perceive this as a threat to the industry," Hess said, "we'll soon see legislative attempts to redefine the rule of capture in Pennsylvania."

Monday's Superior Court opinion differs from similar cases in other states.

Referencing a case in Texas where the fracking company won (Coastal Oil & Gas Corp. v. Garza Energy Trust), the Pennsylvania Superior Court noted in Monday's ruling, "we are not persuaded by the Coastal Oil Court's rationale that a landowner can adequately protect his interests by drilling his own well to prevent drainage to an adjoining property. Hydraulic fracturing is a costly and highly specialized endeavor, and the traditional recourse to 'go and do likewise' is not necessarily readily available for an average landowner."

The Court also noted that applying the rule of capture to hydraulic fracturing is problematic, since it would allow companies to extract natural gas from anywhere without the need for a lease as long as they could set up a fracking well on an adjacent property.

Hess noted that Pennsylvania's laws are unique, so what's happened with regard to the rule of capture and hydraulic fracturing in other states is unlikely to affect how things play out in Pennsylvania.

"I think this is going to be an important decision," he said, "but I think people will be chewing on this opinion for a long time to fully understand what it means."

Court Ruling:

http://www.pacourts.us/assets/opinions/Superior/out/Opinion%20%20ReversedRemanded%20%2010348768634826102.pdf?cb=1

Utilities Look To Electric Cars As Savior Amid Decline In Demand

"The U.S. electricity sector is eyeing the developing electric car market as a remedy for an unprecedented decline in demand for electricity.

After decades of rising electricity demand, experts say the utility industry grossly underestimated the impact of cheap renewable energy and the surge of natural gas production. For the first time ever, the Tennessee Valley Authority is projecting a 13 percent drop in demand across the region it serves in seven states, which is the first persistent decline in the federally owned agency's 85-year history.

Electric vehicles (EVs) only make up 1 percent of the U.S. car market, but utility companies are taking advantage of their growing popularity by investing in charging infrastructure and partnering with carmakers to offer rebates, says Quartz reporter Michael J. Coren. A report by the Rocky Mountain Institute, a non-profit clean energy research group, projects there could be almost 2.9 million electric cars on the road in the next five years. "

Central Texas city wants to pay people to install solar panels

By: Chris Davis

Updated: Mar 30, 2018 09:26 AM CDT

GEORGETOWN, Texas (KXAN) - A central Texas town that already uses 100 percent renewable energy is working out a plan to generate more electricity locally so it can stop buying power to meet demand.

The city of Georgetown wants to start paying property owners to let the city-owned utility install solar panels on their roofs and feed the energy into the broader power grid. The money would come either in the form of lease payments or royalties paid to eligible residential and commercial property owners.

"I think it's a very exciting chance," Bob Weimer said. He lives in Sun City and said when he heard about the proposal, he immediately contacted city leaders to say he wants to be a part of it.

"This is just a normal, natural step for us to become and stay one of the greener cities in Texas," said Weimer, who doesn't currently have solar panels.

Georgetown Solar Suitability Map

The city mapped out every property in its jurisdiction and cataloged how much sun each one gets. City leaders sent the resulting map as part of a grant application to Bloomberg Philanthropies, which selected Georgetown as one of 35 "champion cities" in its 2018 Mayors Challenge, "a nationwide competition that encourages city leaders to uncover bold, inventive ideas that confront the toughest problems cities face," according to the contest's website.

Georgetown got $100,000 from the competition to plan out and refine its project, which it's calling a "virtual power plant." Later this year all the finalists will resubmit their ideas and compete to win four $1 million prizes and one $5 million grand prize.

"Our folks in the electric utility have been kicking this idea around for a few years," Jack Daly, assistant to the city manager, told KXAN. The Mayors Challenge was a good opportunity to jumpstart the project with grant money, he said.

One hundred percent of Georgetown's energy already comes from renewable sources, Daly said, but that includes a wind farm in the panhandle and a solar farm in west Texas. "We were thinking, 'Boy, instead of being regulated by the state grid and relying on transmitting energy long distances, wouldn't it be cool if we made all our power here in Georgetown?'"

It won't mean free energy for city residents, or even a reduction in energy bills, Daly said. What it will provide is price stability so costs are more predictable.

As part of the refining process over the coming months, Daly and utility leaders will work out how to go about paying for roof space and how much it's worth.

Even if Georgetown doesn't win any more money from Bloomberg Philanthropies, the project will continue to move forward, Daly said. The additional grant funding would provide a faster path, but without it, the city will use utility payments from anticipated growth in its customer base to fund it over the coming years.

"As the city grows we would have to invest in new generation to continue to supply additional power," Daly said. "The virtual power plant is an alternative form of generation."

Not everyone will be eligible to get panels; the selection process will be based on the solar radiation map. Property owners whose homes or businesses don't see enough sun to qualify can still take part in the project, Daly said, by hosting backup battery storage to help on cloudy days.

Weimer's house would qualify for panels, based on the map. His roof gets significant sunlight on at least two sides. He's hopeful he can get the city's help installing them so that he can help future generations.

"I have little grandchildren and I’d like to be able to see them grow up with good clean air," he said. "And I want to be a part of the growth of that."

US coal hasn’t set aside enough money to clean up its mines

By Mark Olalde

Published on 14/03/2018, 3:00pm

Schemes that favour coal companies in Appalachia have left a national shortfall that experts said was ‘one of the biggest public failures that has gone under the radar’

Massive strip mines dig into the coal seams of the Powder River Basin in Wyoming. The basin, which also extends north into Montana, accounts for about 40% of the country’s coal production

As the US coal industry winds down, does it have enough money set aside to clean up the vast pits, walls and broken mountains left behind?

A Climate Home News investigation has found the answer is no. Particularly in Appalachia, the land, water and health of mining communities have been put at risk by a critically underfunded system supposed to clean up after mines close.

According to national data compiled and published for the first time on Thursday, mining companies and state governments hold just $9.2 billion nationwide to ensure mining land is reclaimed if operators go bust. Experts told CHN that amount falls far short of what is needed to rehabilitate more than two million acres of mining permits the system is supposed to cover.

In the major coal states of Appalachia, coal production has halved in the past decade. But even as many mines slide toward closure, most states in the region rely on a system of pooled risk that lets companies put up just a fraction of the total costs of reclaiming their mines.

“There is not enough money in the bonds to truly remediate those problems if there were some large-scale walking away from those bonds,” said Scott Simonton, coordinator of Marshall University’s environmental science programme.

With the industry struggling to compete with cheap gas and renewable energy, mass bankruptcies could leave taxpayers with the bill for clean-up. Left untreated, closed mines raise a range of environmental and community health risks, from sinkholes to acid contamination of water courses.

“It is one of the bigger public failures that has gone under the radar,” said Patrick McGinley, a law professor at West Virginia University who has 40 years’ experience in the industry.

The data covers all 23 states that produce 99% of US coal and about 5,000 mining permits.* It was gathered from responses to dozens of records requests submitted to the state environmental and mining agencies in charge of each state’s programme.

Due to greatly varying costs of reclamation from state to state and mine to mine, there is no precise way to estimate how much should be held nationally. But McGinley said the level of bonding in general was too low and in Appalachian states in particular was “preposterous, absolutely ridiculous”.

The communal funds, known as ‘bond pools’ or ‘alternative bonding systems’, have left eastern states with less money per acre – an imperfect but useful measure of the strength of a state’s bonding system – for environmental clean-up than most western states. CHN found Appalachian states hold between $2,373 per acre (Ohio) and $4,604 per acre (Maryland). Colorado, one of the best-protected mining states, holds $10,732 for every acre, and Texas bonds are $7,655 per acre.

Bond pools collect money from mining companies, and if one of them goes out of business, the pool guarantees to pay any reclamation costs that exceed the other money that company has set aside. But if market conditions get tough and several companies fail at once, the funds will not cover all their liabilities.

“How does taking 50 properties like that and adding them up make them any more credit-worthy?” said Luke Danielson, a former regulator and president of the nonpartisan Sustainable Development Strategies Group. “The biggest risk that they are facing is the market risk that the coal price collapses, and then that affects all of them.”

In 2015-16, companies accounting for nearly half of the coal production in the US went into some form of bankruptcy. They have since emerged from that nadir, but the massive, sudden collapse highlights the problem of sharing risk among companies that all produce the same atrophying commodity.

“It just seems to be a very fragile system. That’s the problem. It’s a system that’s designed for small failures,” Simonton said.

CHN spoke with regulators from all six states that currently rely on bond pools. Most pointed to the small number of recently-forfeited permits as evidence of the safety of the shared funds.

Lewis Halstead, deputy director of the Division of Mining and Reclamation in West Virginia’s Department of Environmental Protection, said the bankruptcies guided where the state needed to shore up its bond pools. “There was talk of the increased risk that was perceived – more than perceived – by the [Special Reclamation Fund Advisory Council] and the [Department of Environmental Protection], and we worked through that,” he said.

In part because relatively high rainfall in the east increases the risk of toxic run-off, experts such as Simonton said Appalachian mines required more money per acre to clean up than those in the drier west. In West Virginia alone, acid drainage from years of mining impairs about 2,700 miles of streams, enough to span the width of the continental US.

Peter Morgan is a senior attorney at the Sierra Club who has worked extensively on mine bonding. “One thing that is particularly clear in Appalachia is that these bonds are not being designed to capture the costs of water treatment,” he said.

What is coal mine bonding?

The environmental cleanup of America’s coal mines is guaranteed by a system in which – similar to a security deposit on an apartment – a mining company must put up a financial assurance prior to breaking ground. That money is returned to the company upon successful reclamation and closure. If a company walks away from an operation, state regulators take that bond and use it to hire contractors to finish reclamation. The specifics of this system vary slightly among states, but the goal is always to guarantee environmental reclamation in case a company abandons its mine.

West Virginia has the largest bond pools in the country, worth $150m. Members of the advisory council overseeing the state’s two shared funds told CHN the system they had in place was sufficient to cover clean-up costs.

The data tells a different story. That pot, combined with all other individual reclamation bonds held for each mine, works out at less than $3,200 an acre. A 2017 actuarial report, commissioned by the advisory council and sent to CHN by the state’s regulators, estimated clean-up costs in West Virginia ranged from $7,840 per acre for surface mines to $28,460 per acre for underground mines.
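
To make that gap concrete, the arithmetic behind the comparison can be sketched in a few lines. This is a minimal illustration using only the per-acre figures quoted above; the acreage used for scaling is a hypothetical example, not a number from the CHN data.

# Back-of-the-envelope comparison of West Virginia's per-acre bonding
# against the 2017 actuarial cost estimates quoted above. Illustrative
# only: the example acreage below is hypothetical, not a CHN figure.
bond_held_per_acre = 3_200                 # approx. $ held per permitted acre
cleanup_cost_per_acre = {                  # actuarial estimates, $ per acre
    "surface mine": 7_840,
    "underground mine": 28_460,
}
example_acres = 100_000                    # hypothetical permitted acreage

for mine_type, cost in cleanup_cost_per_acre.items():
    coverage = bond_held_per_acre / cost
    gap_per_acre = cost - bond_held_per_acre
    total_gap_m = gap_per_acre * example_acres / 1e6
    print(f"{mine_type}: bonds cover about {coverage:.0%} of the estimated cost, "
          f"a gap of ${gap_per_acre:,} per acre "
          f"(roughly ${total_gap_m:,.0f}m across {example_acres:,} acres)")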

In West Virginia, 59% of all active and inactive mining permits were owned by a company that had been in some stage of bankruptcy in the previous two years, according to another 2017 report from the Office of Surface Mining Reclamation and Enforcement (OSMRE). “The state will face potential reclamation liability as a result of those bankruptcies well into the future,” it warned.

In Virginia, the company with the largest clean-up liability – A&G Coal Corporation – would need 15 times as much money to reclaim its mines as the state’s entire fund holds (see box for more detail). CHN approached A&G for comment, but could not reach a company official, despite several attempts.

Several states have recognised the problem and increased bonding in recent years. “Are we getting the job done? Absolutely,” said Courtney Skaggs, director of Kentucky’s Division of Mine Reclamation and Enforcement, who led efforts to build up that state’s bond system.

But a 2017 OSMRE report found bonds forfeited when companies prematurely shut down in Kentucky in 2016 still only covered about half the future cost of reclamation.

The problem of underfunded bond systems is not limited to Appalachia. In Oklahoma, the second-lowest-funded US state at $2,203 per acre, a 2010 review found that a sample of permits all had bonds that were between 25 and 50% underfunded.

Around the country, regulators have proven lenient on the industry, experts say, leading to weaker bonding nationwide and the current problems.

“[Mining companies] have a history of influencing public decision makers and politicians through campaign contributions and a history of the good ole boy system,” said McGinley of West Virginia University. “That’s the history of the coalfields.”

Danielson, who sat on Colorado’s Mined Land Reclamation Board for a decade in the 1980s and ‘90s, said putting a number on clean-up costs had never been scientific.

“We were politically negotiating the bond. [The companies] would negotiate. It wasn’t really based carefully on the actual costs of closure. It was based on who could throw the most elbows and who could call up politicians,” Danielson said.

* Data on mines in a few areas – Tennessee, Washington and tribal land – which accounted for only about 1% of 2016’s coal production nationally, is held by the federal government and was not made public.

Virginia

Virginia’s bond pool only holds about $8.8 million but guarantees reclamation for roughly 150 permits across several companies. That has increased slightly from $7.3m in 2012, which an actuarial report warned at the time was only enough to clean up one or two small mines. The company with the largest liability against the bond pool – A&G Coal Corporation – has $134m in reclamation liabilities covered by its 43 Virginia bonds, 42 of which are covered by the pool. This amount of liability is 15 times as much as the entire fund holds. According to data gathered by CHN, A&G only carries an additional $30.5m in reclamation bonds. About $25m of those are self-bonds, making it a significant risk to the reclamation fund.

Indiana

The Indiana Surface Coal Mine Reclamation Bond Pool Fund guarantees just over $24m of companies’ reclamation liability at a dozen mine sites, according to CHN’s data. However, the pool holds only $1.25m, 5% of the cost it is meant to cover for the industry. In 2017, the Indiana Department of Natural Resources attempted to adjust fees to decrease the shortfall but was “unsuccessful” and will work on the regulations again this year, Steve Weinzapfel, director of the department’s reclamation division, told CHN.

Ohio

According to a 2017 actuarial report commissioned by Ohio’s Reclamation Forfeiture Fund Advisory Board, the $25.9m held in the fund would not be sufficient to withstand “shock loss,” a term for unexpected forfeitures. The fund is replenished by taxes on coal production and by fees, so it gradually grows over time. The report found that the fund is two years away from being able to withstand a single, average-sized mine forfeiting. It would take more than 150 years before the fund could handle the largest mining company it guarantees going under.

Credits:

This series was supported by grants from the McGraw Center for Business Journalism at the City University of New York Graduate School of Journalism, the Institute for Journalism & Natural Resources (IJNR) and the European Climate Foundation.

Stark Differences in Climate Impacts Between 1.5 and 2 Degrees of Warming

E360 DIGEST

MARCH 16, 2018

A difference of just half a degree of global warming, from 1.5 to 2 degrees Celsius, would mean that the land where an additional 5 million people worldwide live would be permanently submerged, according to a new study published in the journal Environmental Research Letters.

The research, led by scientists at Princeton University, analyzed the global impacts on sea levels of 1.5 degrees C of warming, the current target of the Paris Agreement, compared to 2 and 2.5 degrees. It looked at data from tide gauges across the globe and created local sea-level rise projections. The scientists examined not only everyday sea levels but also extreme sea-level events, such as storm surges.

The study found that under a 1.5 degrees C scenario, global mean sea levels could increase 1.6 feet by 2100, 1.8 feet for 2 degrees of warming, and 1.9 feet for 2.5 degrees. It also found that if nations managed to limit warming to 1.5 degrees, extreme sea-level events could still be catastrophic for coastal communities. The New York City area, for example, could experience one Hurricane Sandy-like flood event every five years by the end of the century.

“People think the Paris Agreement is going to save us from harm from climate change,” the study’s lead author, DJ Rasmussen, a graduate student at Princeton, said in a statement. “But we show that even under the best-case climate policy being considered today, many places will still have to deal with rising seas and more frequent coastal floods.”

For more information on the difference between a 1.5 and 2 degree world:

https://e360.yale.edu/features/what_would_a_global_warming_increase_15_degree_be_like

Greenwashed Timber: How Sustainable Forest Certification Has Failed

The Forest Stewardship Council was established to create an international system for certifying sustainable wood. But critics say it has had minimal impact on tropical deforestation and at times has served only to provide a cover for trafficking in illegal timber.

BY RICHARD CONNIFF • FEBRUARY 20, 2018

When the Forest Stewardship Council got its start in 1993, it seemed to represent a triumph of market-based thinking over plodding command-and-control government regulation. Participants in the 1992 Rio Earth Summit had failed to reach agreement on government intervention to control rampant tropical deforestation. Instead, environmental organizations, social movements, and industry banded together to establish a voluntary system for improving logging practices and certifying sustainable timber.

The Forest Stewardship Council (FSC) soon set standards that seemed genuinely exciting to environmental and social activists, covering the conservation and restoration of forests, indigenous rights, and the economic and social well-being of workers, among other criteria. For industry, FSC certification promised not just a better way of doing business, but also higher prices for wood products carrying the FSC seal of environmental friendliness.

A quarter-century later, frustrated supporters of FSC say it hasn’t worked out as planned, except maybe for the higher prices: FSC reports that tropical forest timber carrying its label brings 15 to 25 percent more at auction. But environmental critics and some academic researchers say FSC has had little or no effect on tropical deforestation. Moreover, a number of recent logging industry scandals suggest that the FSC label has at times served merely to “greenwash” or “launder” trafficking in illegal timber:

In a 2014 report, Greenpeace, an FSC member, slammed the organization for standing by as FSC-certified loggers ravaged the Russian taiga, particularly the Dvinsky Forest, more than 700 miles north of Moscow. Greenpeace accused FSC-certified logging companies there of “wood-mining” forests the way they might strip-mine coal, as a nonrenewable resource, and of harvesting “areas that are either slated for legal protection or supposed to be protected as a part of FSC requirements.”

In 2015, the U.S. flooring company Lumber Liquidators pleaded guilty to smuggling illegal timber from the last habitat of the Siberian tiger in the Russian Far East. Its main supplier of solid oak flooring was a Chinese company named Xingjia, which held an FSC “chain of custody” certification, meaning it was licensed to handle FSC-certified timber. According to an investigator in the case, another Chinese company marketing to the United States offered to put an FSC label on illegal wood flooring in exchange for a 10 percent markup.

In Peru, investigators determined in 2016 that more than 90 percent of the timber on two recent shipments bound from the Amazon to Mexico and the U.S. was of illegal origin. In what it called an “unprecedented enforcement action,” the Office of the U.S. Trade Representative last October banned the main exporter in those shipments from the U.S. market. That company, Inversiones La Oroza, still boasts on its website that it “complies with the principles and criteria of Forest Stewardship Council (FSC),” though FSC finally suspended its certification in 2017.

Logs certified by the Forest Stewardship Council get stamped with the organization’s logo.

In 2015, an undercover investigation implicated an FSC-certified Austrian company, Holzindustrie Schweighofer, in illegal logging in Romania, including some in national parks and other protected areas. An FSC expert panel subsequently recommended that the organization “disassociate” from Holzindustrie Schweighofer based on “clear and convincing evidence” of illegality. FSC opted at first for suspension instead. An outcry from environmentalists soon pushed it to break ties with the company, but FSC is already working on a “roadmap” to bring Schweighofer back into certification.

The cases in China, Peru, and Romania all resulted from undercover operations by the Environmental Investigation Agency, a Washington, D.C.-based nonprofit. “We didn’t mean to go after FSC,” says David Gehl, that group’s Eurasia programs coordinator. FSC just kept turning up in the same places as a lot of illegal logging, he says. Many logging companies appeared to obtain an FSC certification for management practices on one forest, and then use it to cast a halo over their far more extensive dealings in forests elsewhere, with little regard for sustainability or even legality.

Kim Carstensen, director general of FSC International – which is based in Bonn, Germany – says the organization has acted appropriately in those cases. “I would claim that overall, our control systems are robust, solid, and continuously being developed,” he adds. “Nothing is perfect, and of course there are issues with FSC certificates. But we have very many stakeholders who point us to them and make us act on them. We have corrective actions happening constantly, and there is a lot of solidity, I think, in that system.”

Simon Counsell, executive director of Rainforest Foundation UK and an early proponent of the forest certification idea, argues that the opposite is true. His frustration with FSC led him to co-found the website FSC-Watch.com, “where you can see many, many scores of examples right across the span of FSC’s life, and all types of forests and plantations, that suggest there are still some very serious systemic problems in the FSC. One of them is that the FSC secretariat is unable and arguably unwilling to control the certifying bodies that are responsible for issuing certifications in FSC’s name.”

These certifying agencies often display a lack of expertise on visits to logging operations, says Counsell, along with “the systematic downplaying of problems that are identified, and inadequate attention to fraud and misreporting of information.” That leniency may result partly from being paid directly by the companies they are supposed to audit. The certifiers also “know they can get away with issuing certificates even to companies that are flagrantly breaking the law, without any major repercussions from FSC,” he says. Carstensen counters that FSC takes action based on independent audits of its certifying companies, and that the payment setup is no different from a corporation paying an accounting firm to audit its finances.

Money questions also handicap FSC in other ways, according to its critics. The organization’s decision-making structure consists of environmental, social, and economic (or industry) chambers, each having an equal vote. But many issues get farmed out to working groups, which can take years to reach a consensus. And the reality, says Counsell, is that environmental and social groups typically cannot match the resources and staff hours that logging companies with a financial interest at stake can devote to the process. (Carstensen counters that the environmental and social groups hold their own, in part by their ability to bring media attention to bad behavior.)

When a motion ultimately comes to the floor at FSC’s general assemblies, held every three years, each chamber has a veto, meaning the power to block any initiative that goes against its interests. But at the 2017 general assembly, “the development of a voting block by the economic chamber to kill motions,” Grant Rosomon of Greenpeace wrote afterwards, became known as “the red sea” for the red “no” cards industry voters held up in unison. “This was extremely concerning,” said Rosomon, “particularly as high priority issues for the social and environment chambers were voted against without explanation, justification, or prior engagement in the cross-chamber motion preparation process.” He called it “a turning point” in how FSC operates.

Industry has also gained sway over FSC because of competition from rival forest-certifying organizations, notably the Programme for the Endorsement of Forest Certification (PEFC). David Gehl of the Environmental Investigation Agency calls PEFC “basically certification by the industry, for the industry,” minus the social and environmental chambers. Buyers often have trouble distinguishing what Greenpeace calls “fake forest certification” from the real thing. The result is that it’s more difficult for FSC to impose rigorous standards on logging companies. But the danger is that lax standards could turn the FSC into a “fake forest certification” scheme, too.

Money also skews the balance against effective certification in one other important way. Though FSC’s original purpose was to slow tropical deforestation, it has largely been absent from the tropics. Almost 85 percent of the 492 million acres of forest it has certified to date are in North America and Europe. “It’s as if someone outfitted an armada to sail off and fix forest management in the tropics,” says one FSC-watcher, “but instead it sailed off into the North.”

Loggers certified by the Forest Stewardship Council have cut large portions of the Dvinsky Forest in Russia, including areas that were supposed to be protected.

The change in direction wasn’t deliberate. Getting certified can be expensive, because of the need to set aside 10 percent of a forest for conservation, and the cost of improved labor and logging practices. Logging companies in developed countries are often better situated than their tropical counterparts to pay for those changes, or have already made those changes to comply with local laws. The result, says Counsell, is that FSC “is essentially rewarding forestry that’s already better because there is a better forest regulatory regime. It’s failing to transform those countries in the tropics, in the south, where there isn’t a good forest regime, indicating that as a voluntary measure it really isn’t adequate to change practices. And that of course is a big concern.”

A 2016 meta-analysis of scientific studies found that FSC certification in the tropics has reduced degradation and improved labor and environmental conditions in the affected forest — no small accomplishment. But other rigorously designed studies looking at overall deforestation indicate that FSC has had little or no effect. That may be yet another money question, says Allen Blackman, an economist at Resources for the Future and lead author of a 2015 study on FSC certification in Mexico. Small-scale, poorly-performing logging operations are common in the tropics, and they aren’t the ones likely to get certified. FSC may also have had little effect on deforestation for the simple reason that “a lot of the deforestation in developing countries is not happening associated with forestry operations,” says Blackman. Instead, the driving factor is illegal land use change, meaning conversion of natural forests to palm oil plantations, commercial agriculture, and ranching. The apparent ineffectiveness of certification, Blackman and his co-authors conclude, should “give pause to policymakers” thinking of certification as a tool for addressing deforestation.

Combined with the recent evidence of blatant illegality by FSC-certified companies, it might also give pause to consumers who have put their faith in the FSC label. It might give pause to the entire wood products industry, which has profited up to now by turning a blind eye to that illegality. Sooner or later, industry will have to face up to the painful reality that it needs a far more rigorous forest certification scheme, combined with governmental regulation — for instance, to stop those land conversions — if it wants there to be any forests left to profit from in the future.

Richard Conniff is a National Magazine Award-winning writer whose articles have appeared in The New York Times, Smithsonian, The Atlantic, National Geographic, and other publications. His latest book is "House of Lost Worlds: Dinosaurs, Dynasties, and the Story of Life on Earth."

Check out this infographic: https://assets.bwbx.io/images/users/iqjWHBFdfxIU/iBAuQOwqXr.o/v1/800x-1.png

Global energy giants forced to adapt to rise of renewables

Companies face world where falling cost of solar and wind power pushes down prices

Adam Vaughan

Sat 17 Mar 2018 03.51 EDT First published on Fri 16 Mar 2018 12.00 EDT

Seven years after an earthquake off Japan’s eastern coast led to three meltdowns at the Fukushima Daiichi nuclear power station, the aftershocks are still being felt across the world. The latest came last Saturday when E.ON and RWE announced a huge shakeup of the German energy industry, following meetings that ran into the early hours.

Under a complex asset and shares swap, E.ON will be reshaped to focus on supplying energy to customers and managing energy grids. The company will leave renewables. RWE will focus on power generation and energy trading, complementing its existing coal and gas power stations with a new portfolio of windfarms that will make it Europe’s third-biggest renewable energy producer.

The major change comes two years after both groups split their green and fossil-fuel energy businesses, a result of the plan by the German chancellor, Angela Merkel, to phase out nuclear by 2022, and also the Energiewende, Germany’s speeded-up transition to renewables after Fukushima.

Coming so soon after 2016’s drastic overhaul, last week’s shakeup raises the question of what a successful energy utility looks like in Europe today. How do companies adapt to a world where the rapid growth of renewables pushes down wholesale prices, and the electrification of cars begins to be felt on power grids?

Peter Atherton, an analyst at Cornwall Insight, said the deal showed that E.ON and RWE did not get their reorganisation right two years ago. It marks a decisive break with the old, traditional model of a vertically integrated energy company that generates energy, transports it and sells it.

In the noughties, the conglomerate model was seen as a way for energy firms to succeed, leading to a wave of mergers. Some, such as Italy’s Enel and Spain’s Iberdrola (ScottishPower’s owner), are still pursuing this “do everything” model.

But by and large, companies are being broken up and becoming more specialised. “What you’re certainly seeing is companies taking bets about where the value will be,” said Atherton.

RWE argues that today the only way to compete in European government auctions for renewable energy subsidies is to go big. Rolf Martin Schmitz, the group’s chief executive, said: “Critical mass is the key in renewable energy. Before this transaction, neither RWE nor E.ON was in this position.”

After the deal, RWE will have around 8GW of renewable capacity and another 5GW in the pipeline, which will together account for 60% of its earnings by 2020.

Schmitz also said that a new type of company was needed to thrive in a world where windfarms and other green energy projects would soon have to succeed on market prices, not government subsidies. “Renewables will evolve from a regulated business to market competition,” he said.

E.ON, meanwhile, is majoring on supplying people with energy and services, and will grow from 31 million customers to roughly 50 million after the deal. It will also have a much greater proportion of its earnings – 80%, up from 65% – coming from the regulated, lower-return but lower-risk business of energy networks.

John Feddersen, chief executive of Aurora Energy Research, said the two firms were going in very different directions, but the path E.ON had taken was less well trodden.

“This is to some extent a question of try it and see what works. [For E.ON], owning grids and lobbying government for good regulatory outcomes is a well-understood business. However, the [customer] services side is untried,” he said.

The changing nature of power generation in Europe has been felt most keenly in Germany because of the Energiewende, but industry-watchers say the same pattern is driving companies to transform themselves across the continent. In the UK, British Gas owner Centrica is halfway through a sometimes painful reinvention of itself as a customer-centric energy company, divesting its old, large power stations to focus on selling services such as smart heating systems, as well as gas and electricity.

The UK’s second-biggest energy firm, SSE, is moving in the opposite direction. It is getting out of domestic energy supply, banking instead on regulated networks and renewable power generation, where prices are guaranteed.

The picture is further complicated by the entrance of big oil, which is taking serious steps to diversify out of oil and gas and into the world of energy utilities. Norway’s Statoil last week rebranded itself as Equinor to reflect its transformation into a “broad energy” company that deploys windfarms as well as oil rigs.

Shell recently bought the UK’s biggest independent household energy supplier, First Utility, and has also acquired firms in electric car infrastructure. That puts it in direct competition with E.ON, which promised to roll out charging points faster as a result of the asset swap.

Investors seem to like the paths E.ON and RWE have taken, with big bumps in the share prices of both after the deal. But no one knows if we will be back here in two years’ time. “I would view this very much as a test of the right structure for an energy company,” said Feddersen.

Kelp Farms and Mammoth Windmills Are Just Two of the Government’s Long-Shot Energy Bets

By Brad Plumer, March 16, 2018

A project presented this week at an energy research conference proposes using tiny robots to farm seaweed for use in biofuels.

Off the coast of California, the idea is that someday tiny robot submarines will drag kelp deep into the ocean at night, to soak up nutrients, then bring the plants back to the surface during the day, to bask in the sunlight.

The goal of this offbeat project? To see if it’s possible to farm vast quantities of seaweed in the open ocean for a new type of carbon-neutral biofuel that might one day power trucks and airplanes. Unlike the corn- and soy-based biofuels used today, kelp-based fuels would not require valuable cropland.

Of course, there are still some kinks to work out. “We first need to show that the kelp doesn’t die when we take it up and down,” said Cindy Wilcox, a co-founder of Marine BioEnergy Inc., which is doing early testing this summer.

Ms. Wilcox’s venture is one of hundreds of long shots being funded by the federal government’s Advanced Research Projects Agency-Energy. Created a decade ago, ARPA-E now spends $300 million a year nurturing untested technologies that have the potential — however remote — of solving some of the world’s biggest energy problems, including climate change.

This week at a convention center near Washington, thousands of inventors and entrepreneurs gathered at the annual ARPA-E conference to discuss the obstacles to a cleaner energy future. Researchers funded by the agency also showed off their ideas, which ranged from the merely creative (a system to recycle waste heat in Navy ships) to the utterly wild (concepts for small fusion reactors).

Consider, for instance, wind power. In recent years, private companies have been aiming to build ever-larger turbines offshore to try to catch the steadier winds that blow higher in the atmosphere and produce electricity at lower cost. One challenge is to design blades as long as football fields that will not buckle under the strain.

At the conference, one team funded by ARPA-E showed off a new design for a blade, inspired by the leaves of palm trees, that can sway with the wind to minimize stress. The group will test a prototype this summer at the Department of Energy’s wind-testing center in Colorado, and ARPA-E has connected the team with private companies such as Siemens and the turbine manufacturer Vestas that can critique their work.

While there are no guarantees, the researchers aim to design a 50-megawatt turbine taller than the Eiffel Tower with 650-foot blades, which would be twice as large as the most monstrous turbines today. Such technology, they claim, could reduce the cost of offshore wind power by 50 percent.

Or take energy storage — which could enable greater use of wind and solar power. As renewable energy becomes more widespread, utilities will have to grapple with the fact that their energy production can fluctuate significantly on a daily or even monthly basis. In theory, batteries or other energy storage techniques could allow grid operators to soak up excess wind energy during breezy periods for use during calmer spells. But the current generation of lithium-ion batteries may prove too expensive for large-scale seasonal storage.

It’s still not clear what set of technologies could help crack this storage problem. But the agency is placing bets on everything from novel battery chemistries to catalysts that could convert excess wind energy into ammonia, which could then be used in fertilizer or as a fuel itself.

At the summit, Michael Campos, an ARPA-E fellow, also discussed the possibility of using millions of old oil and gas wells around the Midwest for energy storage. One idea would use surplus electricity to pump pressurized air into the wells. Later, when extra power was needed, the compressed gas could drive turbines, generating electricity. A few facilities like this already exist, though they typically rely on salt caverns. Using already-drilled wells could conceivably reduce costs further.

“This is a very early stage idea,” Dr. Campos told the audience. “I’d love to hear from you if you have ideas for making this work — or even if you think it won’t work.”

Other projects focused on less-heralded problems. A company called Achates Power showed off a prototype of a pickup truck with a variation on the internal combustion engine that it hoped could help heavy-duty trucks get up to 37 miles to the gallon — no small thing in a world in which S.U.V. sales are booming. Several other ventures were tinkering with lasers and drones to detect methane leaks from natural gas pipelines more quickly. Methane is a far more potent greenhouse gas than carbon dioxide.

Looming over the conference, however, was the murky future of the agency itself. The Trump administration, which favors more traditional sources of energy such as coal, has proposed eliminating the agency’s budget altogether, arguing that “the private sector is better positioned to advance disruptive energy research.”

So far, Congress has rejected these budget cuts and continues to fund the agency. But the uncertainty echoed throughout the conference, even as Rick Perry, the energy secretary, sent along an upbeat video message lauding the agency’s work — a message seemingly at odds with the White House’s budget.

“We are at a crossroads,” Chris Fall, the agency’s principal deputy director, told the attendees. “But until we’re told to do something different, we need to keep thinking about the future.”

When Congress first authorized ARPA-E in 2007, the idea was that private firms often lack the patience to invest in risky energy technologies that may take years to pay off. Many solar firms, for instance, are more focused on installing today’s silicon photovoltaic panels than on looking for novel materials that might improve the efficiency of solar cells a decade from now.

Because energy technologies can take years to reach fruition, the agency does not yet have any wild success stories to brag about. By contrast, a similar program at the Pentagon created in the 1950s, the Defense Advanced Research Projects Agency, or DARPA, can fairly claim to have laid the groundwork for the internet.

Instead, ARPA-E’s defenders have to cite drier metrics, like the fact that 13 percent of projects have resulted in patents, or that its awardees have received $2.6 billion in subsequent private funding.

In a review last year, the National Academies of Sciences, Engineering and Medicine concluded that “ARPA-E has made significant contributions to energy R&D that likely would not take place absent the agency’s activities.” The report added, “It is often impossible to gauge what will prove to be transformational.”

Brad Plumer is a reporter covering climate change, energy policy and other environmental issues for The Times's climate team. @bradplumer

How a One-Armed Surfer Plans to Fix Wildfires

With an environmentally friendly firefighting gel called Strong Water

Wes Siler, Mar 9, 2018

Last year was the most expensive wildfire season on record. The U.S. Forest Service spent $2 billion fighting blazes in 2017, and those fires still destroyed more than $12 billion in property—in California alone. In the 13 western states, the total value of homes threatened by wildfire now tops $500 billion, and due to climate change, that threat is only going to get worse.

Plus, the way we fight fires isn't great for the environment. The vast quantities of water dumped and sprayed on blazes can wash toxic debris into groundwater. Worse, the foam and thickening agents used by firefighters are toxic, adding to pollution and potentially threatening the health of firefighters and the public. Finally, during a drought, water can be difficult to find nearby and thus require expensive transportation by land or air.

In 1993, Jeff Denholm was working as a fisherman when he lost his right arm to a trawler’s driveshaft off the coast of Alaska. After surviving the 21-hour trip to the nearest hospital and the subsequent amputation, he set about designing a prosthesis that would enable him to continue surfing. Denholm has since created similar prosthetic arms for skiing and mountain biking, and he now works as a surf and paddleboard ambassador for Patagonia.

Denholm also owns a fleet of fire trucks that he rents to the U.S. Forest Service to help it fight wildfires, which is how he became aware of the problems with existing firefighting chemicals and decided to look for a solution. His search led him to Steve Haddix, an engineer who had developed a biodegradable gel that promised just that. The two are now partners in Atira Systems, which makes just one product—Strong Water.

Strong Water is a gel intended to be coated on structures and even forests to make them fire-resistant. It’s distributed as a concentrate (Denholm won’t reveal exactly what's in it, but says the mixture poses no risk to humans or the environment), which is then shipped to local fire departments. They then mix it with water, and voilà—they get a toothpaste-like substance suitable for fire protection or suppression. Five gallons of Strong Water concentrate combined with 250 gallons of water can coat a 1,074-square-foot structure with three-quarters of an inch of gel.
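
For readers translating the quoted mix ratio into planning numbers, a minimal sketch follows. It relies only on the figures above (5 gallons of concentrate per 250 gallons of water covering roughly 1,074 square feet) and assumes, as a simplification, that both the ratio and the coverage scale linearly; the 500-gallon tank in the example is hypothetical.

# Rough planning arithmetic based on the figures quoted above (5 gal of
# concentrate per 250 gal of water covering ~1,074 sq ft). Assumes the
# ratio and coverage scale linearly; this is an assumption, not a
# manufacturer specification.
CONCENTRATE_PER_GALLON_OF_WATER = 5 / 250   # gallons of concentrate per gallon of water
COVERAGE_PER_BATCH_SQFT = 1_074             # square feet coated per 250-gallon batch

def concentrate_needed(water_gallons):
    """Gallons of concentrate for a given water volume at the quoted ratio."""
    return water_gallons * CONCENTRATE_PER_GALLON_OF_WATER

def coverage_sqft(water_gallons):
    """Approximate square footage coated, scaling the quoted batch linearly."""
    return water_gallons / 250 * COVERAGE_PER_BATCH_SQFT

# Example: a hypothetical 500-gallon brush-truck tank
print(concentrate_needed(500))   # 10.0 gallons of concentrate
print(coverage_sqft(500))        # 2148.0 square feet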

Denholm claims that the gel, which has a six-year shelf life, increases the “value” of water to firefighters by up to 20 times. He arrives at this calculation due to the gel’s ability to “stick and stay” on vertical surfaces. Where water and foams simply run off whatever they’re sprayed on, Strong Water clings for two to eight hours. That enables considerably less water to provide much more effective fire prevention. When dropped from an airplane, Strong Water will coat the upper layers of a forest. It can be sprayed onto houses well in advance of a fire’s arrival. Because it persists through flames, the gel can prevent the fire from rekindling.

With current technology, firefighters might “write off” a home if it’s adjacent to an overwhelming amount of fuel (dead trees and brush). They simply lack the ability to effectively protect structures in those circumstances. With Strong Water, both that home and the adjacent fuel could be coated in fire-extinguishing gel an hour or two before the fire arrives, allowing firefighters to efficiently save the structure and move on to its neighbors, all well in advance of the actual fire line.

The stuff's already been approved for use in California following a three-year trial by the state’s Office of Emergency Services. Atira is working to create local stockpiles of the concentrate around the state so it’ll be available wherever it’s needed during this summer’s fire season. It’s currently being used by fire departments in San Diego County and San Bernardino County, as well as on the trucks that Denholm supplies to Forest Service Region 6 in the Pacific Northwest.

It’s difficult to fully quantify the advantages of this new technology, especially because it’s designed for such a chaotic, constantly changing application and all the methods and practices for using it have yet to be developed. Government employees like firefighters are also prevented from endorsing businesses and their products. But Strong Water has already faced its first real-world test: Atira estimates that roughly 1,000 gallons of the gel concentrate were used to fight last year’s destructive Napa and Thomas fires, protecting $200 million in homes.

As for the name? “Strong water took my arm on the Bering Sea. Strong water has provided a platform to leverage athletics into environmental activism," Denholm says. "Strong water is the nexus of all things in my life."

Please Don’t Flush the Toilet. It’s Raining.

By WINNIE HU, MARCH 2, 2018

Willis Elkins, a program manager for the Newtown Creek Alliance, at the creek in Queens. Every year billions of gallons of wastewater are discharged into New York waterways when drainage is overloaded by precipitation.

Need another excuse to put off washing dirty dishes or doing laundry?

How about this: It’s raining outside.

New York City is calling on residents in parts of Brooklyn and Queens to cut back their water use during rainstorms by postponing showers and other chores — even waiting to flush toilets. The reason is that household sewage flows into the same underground sewer pipes that also collect rainwater runoff from rooftops and streets.

When those pipes are overloaded with rainwater, the combined overflow is then discharged directly into nearby rivers, bays and creeks instead of going to wastewater treatment plants.

The new “Wait …” campaign is the latest strategy by the city’s Department of Environmental Protection to reduce so-called “combined sewer overflows” that have long polluted local waterways, closed down beaches and plagued recreational sports. Today, about 20 billion gallons of combined sewer overflows are discharged annually into waterways, down from nearly 110 billion gallons in 1985, according to city officials. Typically, about 90 percent of that combined overflow is rainwater runoff.

By now, many New Yorkers are accustomed to doing their part for the environment by recycling trash, saying no to plastic shopping bags and using air conditioning sparingly on hot summer days. But even the most conscientious residents may not necessarily make the connection between the water that swirls down their drains during a rainstorm and the resulting sewer overflows that muck up the rivers where they sail or kayak.

“We take it for granted, you flush your toilet and it goes away,” said Willis Elkins, a program manager for the Newtown Creek Alliance, a nonprofit advocacy and educational group that helped conceive the idea for the campaign. “Unfortunately, we have such a vast and old sewer system in the city, it can’t always handle excessive rain.”

Angela Licata, a deputy commissioner for the Department of Environmental Protection, said that in about 60 percent of the city’s sewer system, the same pipe is used to collect rainwater and sewage from homes and businesses, mainly in areas with older infrastructure. It was not until the 1950s that the city began building separate lines to avoid overloading the sewer system. “We’re grappling with this very difficult legacy problem,” she said.

The Newtown Creek Wastewater Treatment Plant in Brooklyn. A new program aims to get residents in parts of Brooklyn and Queens to cut back their water use during rainstorms to try to prevent the overloading of sewer pipes.

The city has spent more than $45 billion since the 1980s to improve wastewater treatment and reduce the discharge of combined sewer overflows, resulting in waterways that are the cleanest in more than a century.

It has built and upgraded wastewater treatment plants, and is spending about $1.5 billion just on green projects such as installing “curbside rain gardens” and other infrastructure in parks, playgrounds and public housing projects to absorb storm water and keep it out of the sewer system. It also plans to disinfect some sewer overflows before they are discharged from sewer lines by using a chlorination process in the pipes.

In total, the city has 14 wastewater treatment plants that process an average of 1.3 billion gallons of sewage on rainless days. They can generally handle up to 3 billion gallons per day.

The new campaign aims to reduce the combined sewer overflows into Newtown Creek, a 3.5-mile-long waterway that forms part of the border between Brooklyn and Queens, as well as into Bowery Bay, Flushing Bay and Flushing Creek in northern Queens.

To sign up volunteers, the city will run ads on Facebook, partner with local environmental groups, and mail fliers to about 30,000 homes in two dozen neighborhoods that lie in the drainage areas for those waterways, including Williamsburg, Greenpoint, Astoria, Steinway, Jackson Heights, Elmhurst, and Corona. The campaign will focus on those living in single-family homes because their water usage data will be easier to collect and analyze, according to city officials who hope to eventually expand it to apartment buildings too.

One recruitment pitch compares the overflows to a familiar problem: “It’s like rush hour on the freeway: there’s only so much road and if everyone uses it at the same time, it can get jammed. And just like rush hour, the best thing to do is avoid it.”

City officials will monitor real-time rainfall data at the Newtown Creek and Bowery Bay treatment plants to determine when combined sewer overflows are likely. Once gauges at the plants register a half-inch of rainfall, volunteers will be sent a text message asking them to wait to use water. After the storm ends, and plant operations return to normal, they will receive a second text thanking them.
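The trigger logic behind those texts is simple enough to sketch in a few lines. The snippet below is only an illustration of that threshold-and-notify pattern, not the city's actual system; the half-inch trigger comes from the article, while the function names and the notification mechanism are hypothetical.

```python
# Minimal sketch of a rainfall-triggered "wait to use water" alert, modeled on
# the campaign described above. The 0.5-inch trigger is from the article;
# everything else (names, send_text) is hypothetical.

WAIT_THRESHOLD_INCHES = 0.5  # gauge reading that signals a likely sewer overflow

def check_gauges(rainfall_inches, alert_active, send_text):
    """Send a 'please wait' text when rainfall crosses the threshold,
    and a 'thank you' text once readings drop back below it."""
    if not alert_active and rainfall_inches >= WAIT_THRESHOLD_INCHES:
        send_text("Heavy rain: please wait to shower, do laundry, or flush if you can.")
        return True            # alert is now active
    if alert_active and rainfall_inches < WAIT_THRESHOLD_INCHES:
        send_text("Storm has passed - thanks for waiting! Normal water use is fine.")
        return False           # alert cleared
    return alert_active

# Example: simulate a series of gauge readings over one storm
state = False
for reading in [0.1, 0.3, 0.6, 0.7, 0.2]:
    state = check_gauges(reading, state, print)
```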

The new campaign has cost about $120,000 to develop and put into effect.

It grew out of an earlier effort by the Newtown Creek Alliance, which enlisted several dozen residents to curtail water use during rainstorms in 2014. Two years later, officials expanded upon that idea for a six-month pilot project in neighborhoods around Newtown Creek, enlisting 379 volunteers from more than 200 buildings. Of those, only nine quit the program, mostly because they were moving away from the area.

During the pilot project, volunteers were asked to wait a total of 13 times — with the average wait lasting a little over seven hours. By the end, there was a 5 percent drop in the average daily water consumption in the buildings where the volunteers lived.

Sarah Lilley, 52, a freelance public radio producer in Williamsburg, did not find it hard to give up water at certain times. She postponed showers and kept toilet flushing to a minimum. “You’re just talking about a rainstorm, you’re not talking about a weeklong blizzard,” she said.

Ms. Lilley became so well trained that she continued to do those things on her own after the pilot project ended. “I’m the one who knows that I could have had the sink running for 45 minutes after that dinner party and I didn’t,” she said. “I know the amount of water I did not use — and I can envision it not flowing down the street, and not picking up trash as it goes.”

Another volunteer, Sarah Balistreri, 38, a school curriculum specialist who lives in Greenpoint, said she sometimes had to shower before going to work, but she made it quick. And she washed laundry if it was her only block of time to get it done for the week. “I felt bad,” she said. “I wasn’t a perfect participant.”

But those were the exceptions. “My behavior over all has changed,” said Ms. Balistreri, who kayaks on Newtown Creek and has enjoyed the benefits of cleaner water firsthand.

To make sure she never missed a text, she even programmed the number into her cellphone along with an image of a poop emoji. It’s still on her phone.

“It’s a reminder,’’ she said, “of what this is all about.”

Divesting from Big Oil a tough sell — even in the bluest cities and states

By DANIELLE MUOIO 03/07/2018 06:35 PM EST

NEW YORK — National environmental advocates flanked Bill de Blasio in January as the mayor announced the first steps toward stripping $5 billion in New York City pension fund investments from Big Oil. Fossil fuel corporations have profited from “horrible, disgusting” practices, de Blasio said, and New York has to be a “beacon to the world. … We have to show it can be done.”

But de Blasio’s proposal — which was not actually to divest, but to simply study its effects — immediately drew skepticism from New York City’s five pension boards, which worried that dropping oil and gas stocks would hurt their retirees’ financial futures. The police pension board quickly rejected the idea. The firefighters’ board tabled the notion. Trustees on the other three boards approved the study, but still expressed wariness.

It turns out that even in some of the country's bluest states and cities, the notion of threatening pensions in pursuit of a healthier environment is proving a tough sell.

Ambitious Democratic politicians across the country seeking to burnish their green credentials, like de Blasio, are increasingly taking up the flag of divestment. But they’re frequently finding themselves in an unusual battle with their traditional allies in organized labor. Unions contend that divestiture may be a politically expedient rallying cry for liberals, but they say it’s coming at the expense of real fiscal concerns about the health of public employee pension systems, which experts describe as increasingly underfunded.

“This should not be funded on the backs of our members and the city members that belong to NYCERS,” said Michael Carrube, president of the Subway Surface Supervisors Association union, whose pensions are funded through the New York City Employees’ Retirement System.

And the state's comptroller, Tom DiNapoli, has his own concerns. “We all know that public pension funds are often under attack ... so you really need to look carefully at these kinds of questions,” said DiNapoli, a divestment opponent, in an interview. “We have to really be concerned about the bottom line.”

Pecuniary concerns seem to be winning out over environmental ones from coast to coast.

In Seattle, former Mayor Ed Murray called on the city's board last April to divest from coal companies and start re-evaluating its position on fossil fuel investments. But the board ultimately voted against coal divestment in July, citing its fiduciary duty to pension holders, The Seattle Times reported.

California, arguably one of the most progressive states on environmental issues, has stalled in its push to divest in a significant way from fossil fuels, despite vocal support from powerful state leaders.

So far, despite rosy predictions by de Blasio — whose national political ambitions are no secret — New York City’s efforts also have a long way to go to match the mayor’s lofty rhetoric.

The mayor and New York City Comptroller Scott Stringer, who joined him for the January announcement, presented the $5 billion divestment as a fait accompli, to the cheers of environmental advocates.

“The further we go into this, the more it becomes clear there is no financial reason not to take an important moral stance,” said Bill McKibben, co-founder of pro-divestment 350.org, who sat at the dais during the divestment press conference. “Washington, D.C., did it already. ... This is the smart money getting out.”

But Stringer and de Blasio don’t actually control how the pension funds invest — or divest — their money.

The city’s roughly $191 billion pension system is made up of five separate funds, each governed by its own board of trustees. Stringer and a mayoral representative sit on each board but control a majority on none. They share board leadership with representatives from various city unions, such as the Uniformed Firefighters Association, the International Brotherhood of Teamsters and de Blasio’s oft-antagonists at the Patrolmen’s Benevolent Association.

There is no consensus among the city’s pension fund boards that the city should pursue divestment.

A few weeks after his announcement with de Blasio, Stringer introduced a resolution merely to study the fiscal implications of the proposed $5 billion divestment. Three of the five boards — the New York City Employees' Retirement System, the Teachers’ Retirement System, and the Board of Education Retirement System — voted to pass the resolution.

“What they’re really doing is studying it,” DiNapoli said. “Maybe they’re studying it with a lot of fanfare at the front end, but at this point they’ve divested as much as we’ve had, meaning nothing.”

Divesting $5 billion in holdings is no drop in the bucket, and some say it’s too risky since the city’s pension funds are already underfunded, forcing city taxpayers to increase their contributions.

The city's underfunded pension liabilities range from $65 billion to $142 billion, according to a report by the American Council for Capital Formation. Taxpayer contributions to the funds have also grown sharply, from $1.4 billion in fiscal year 2002 to $9.3 billion in fiscal year 2017.

The report focuses on how Stringer has increased investments in underperforming yet politically palatable assets, like the Developed Environmental Activist asset class, even as the pensions continue to stagnate.

“If the plan is to divest from the carbon-rich or carbon-high producing companies and focus on [Environmental, Social And Governance Criteria], the ramifications could be pretty bad for retirees,” said Tim Doyle, vice president of policy and general counsel at the ACCF. “When they talk about it, they don’t talk about it in the terms of studying it, they talk about it in terms of it being done, and obviously that is concerning.”

Union leaders have also expressed concern that divestment could lead to higher employee contributions and, ultimately, a loss in savings. While the New York City Employees' Retirement System, by far the largest pension fund, voted to approve Stringer’s study, actually pulling the trigger on divestment will likely face more significant opposition.

“I got from it that there are a lot of really positive things, but it’s really expensive to implement,” said Roy Richter, president of the Captains Endowment Association, at a recent hearing on the topic. Richter sits on the board overseeing the Police Pension Fund.

The obstacles that organized labor presents to the fossil fuel divestment movement are not unique to New York City.

While the pressure to take action on climate change has grown with each freakish weather event, demands for action gained more traction last year, when the Trump administration withdrew from the landmark Paris accord and began rolling back Obama-era policies aimed at slashing greenhouse gas emissions.

And yet so far, divestment advocates can point to few substantial successes in the world of public pensions.

In 2016, Washington, D.C.’s largest pension fund — the District of Columbia Retirement Board — dropped $6.5 million in oil, natural gas and coal direct investments, a pittance for the $6.4 billion fund.

Stringer backed an effort to divest coal from the city’s pension system in 2015. Since then, the city has dropped its holdings in thermal coal, but that too represents a small investment overall.

Fiduciary leaders are not convinced that going further makes sense. DiNapoli is a proponent of maintaining large stakes in the fossil fuel industry in order to have a greater say in its behavior. Until recently, Stringer, a likely mayoral candidate in 2021, subscribed to the same theory.

Moreover, DiNapoli said divesting from fossil fuels would have a more consequential impact than any other divestment strategy that the state has pursued.

“When we’ve divested in the past … it was a multiyear process, and in the end we divested about $78 million of holdings in that process,” he said. “The impact on the fund is a much more consequential one for fossil fuels.”

In 2015, Kevin de León, president pro tempore of the California state Senate, sponsored a bill requiring the state’s two pension funds to divest from thermal coal and start looking into pulling out of fossil fuel holdings.

Since de Leon’s bill was signed into law, the California Public Employees’ Retirement System and California State Teachers’ Retirement System have started the process of divesting their coal funds. Both pension funds were given until July 1, 2017, to liquidate their thermal coal assets.

CalPERS has yet to disclose how much it's divested from coal but will do so for the first time in a November report, a spokesman for the pension fund told POLITICO. CalSTRS informed the legislature on Dec. 31 that it has divested all of its holdings in thermal coal companies.

In both cases, coal divestment was a relatively easy sell compared with the states’ more substantial investments in natural gas and oil.

In New York, coal investments accounted for only $33 million across the city’s five pension funds. In 2015, CalPERS had only $83 million invested in thermal coal companies, and CalSTRS had $40 million worth of holdings.

Arguments against California’s investments in fossil fuels like oil and natural gas presaged the divestment discussion at the state and city level in New York: that pensions are underfunded, and taxpayer dollars are offsetting the problem at the expense of other public sector investments.

CalPERS had $11.9 billion invested in fossil fuel companies as of 2015.

The California proposal to divest from other fossil fuels drew such an outcry from union leaders that a source familiar with de Leon’s efforts told POLITICO he isn’t planning to take the issue up again.

“The fact is when you look at the green energy companies, they are not as profitable as the fossil fuel companies,” said Steve Crouch, the director of public employees for Local 39, a California labor union. “Now is not the time to be compounding your underfunded problem. Now is definitely not the time to do it.”

Last year, California Assemblyman Ash Kalra, a San Jose Democrat, mounted a more targeted — and theoretically more practicable — effort, proposing a bill that would have compelled CalPERS to divest from companies that do business with the Dakota Access pipeline, the crude oil line that gained media attention thanks to protests from the Standing Rock Sioux Tribe. That bill didn’t go far either: ultimately, the state Legislature approved a measure that merely required disclosure of pipeline-related investments, without forcing divestment itself.

Efforts in Albany to force New York divestment on a statewide level have hit similar hurdles.

In December, Gov. Andrew Cuomo pushed for DiNapoli to drop fossil fuel holdings from the state’s $200 billion pension fund.

As in New York City and California, the state’s public union leaders recoiled. In Albany, they argued that DiNapoli would be breaching his fiduciary duties if he were to move forward with the plan.

“The net effect of this either means a loss of health and safety or wages in the future or a hit to the public perception to their effectiveness,” said Dan Levler, president of the Suffolk County Association of Municipal Employees.

At a February New York State Public Employee Conference in Albany, some union leaders urged caution in light of what happened when CalPERS divested from tobacco, a move that reportedly cost the fund $2 billion to $3 billion.

Not to be deterred, New York state Sen. Liz Krueger has introduced a bill, NY S4596 (17R), that would require DiNapoli to divest from the 200 largest publicly traded fossil fuel companies by 2022, unless he showed “clear and convincing” evidence that doing so would pose a significant risk to the funds.

“[DiNapoli] believes a model where one politician decides universally the pension plans for the state of New York is the right model,” Krueger said in an interview. “I won’t be leading the charge against that model, but I actually do think there is room [for debate] on critically important issues, and I argue the future of the planet falls under the critically important issues category.”

In a statement, Stringer spokesman Tyrone Stevens described the vote to study divestment by boards “comprising more than two-thirds of our assets” as representing a big step forward.

“We’re absolutely committed to this effort,” he said, “because we know that climate change is real — and for the good of our planet and our pension fund beneficiaries, New York City is taking action.”

City Hall spokesman Seth Stein said in a statement that divestment is moving forward.

“With the unanimous BERS vote, in addition to the earlier unanimous NYCERS and TRS votes, divestment is moving forward for the majority of the City’s approximately $190 billion pension funds,” he said. “NYC is moving full speed ahead to getting our dollars out of Big Oil."

WHO launches health review after microplastics found in 90% of bottled water

Researchers find levels of plastic fibres in popular bottled water brands could be twice as high as those found in tap water

Graham Readfearn

Wed 14 Mar 2018 21.46 EDT Last modified on Thu 15 Mar 2018 20.05 EDT

The World Health Organisation (WHO) has announced a review into the potential risks of plastic in drinking water after a new analysis of some of the world’s most popular bottled water brands found that more than 90% contained tiny pieces of plastic. A previous study also found high levels of microplastics in tap water.

In the new study, analysis of 259 bottles from 19 locations in nine countries across 11 different brands found an average of 325 plastic particles for every litre of water being sold.

In one bottle of Nestlé Pure Life, concentrations were as high as 10,000 plastic pieces per litre of water. Of the 259 bottles tested, only 17 were free of plastics, according to the study.

Scientists based at the State University of New York in Fredonia were commissioned by journalism project Orb Media to analyse the bottled water.

The scientists wrote they had “found roughly twice as many plastic particles within bottled water” compared with their previous study of tap water.

According to the new study, the most common type of plastic fragment found was polypropylene – the same type of plastic used to make bottle caps. The bottles analysed were bought in the US, China, Brazil, India, Indonesia, Mexico, Lebanon, Kenya and Thailand.

Scientists used Nile red dye to fluoresce particles in the water – the dye tends to stick to the surface of plastics but not most natural materials.

The study has not been published in a journal and has not been through scientific peer review. Dr Andrew Mayes, a University of East Anglia scientist who developed the Nile red technique, told Orb Media he was “satisfied that it has been applied carefully and appropriately, in a way that I would have done it in my lab”.

The brands Orb Media said it had tested were: Aqua (Danone), Aquafina (PepsiCo), Bisleri (Bisleri International), Dasani (Coca-Cola), Epura (PepsiCo), Evian (Danone), Gerolsteiner (Gerolsteiner Brunnen), Minalba (Grupo Edson Queiroz), Nestlé Pure Life (Nestlé), San Pellegrino (Nestlé) and Wahaha (Hangzhou Wahaha Group).

A World Health Organisation spokesman told the Guardian that although there was not yet any evidence on impacts on human health, it was aware it was an emerging area of concern. The spokesman said the WHO would “review the very scarce available evidence with the objective of identifying evidence gaps, and establishing a research agenda to inform a more thorough risk assessment.”

A second unrelated analysis, also just released, was commissioned by the campaign group Story of Stuff and examined 19 consumer bottled water brands in the US. It also found plastic microfibres were widespread.

The brand Boxed Water contained an average of 58.6 plastic fibres per litre. Ozarka and Ice Mountain, both owned by Nestlé, had concentrations at 15 and 11 pieces per litre, respectively. Fiji Water had 12 plastic fibres per litre.

Abigail Barrows, who carried out the research for Story of Stuff in her laboratory in Maine, said there were several possible routes for the plastics to be entering the bottles.

“Plastic microfibres are easily airborne. Clearly that’s occurring not just outside but inside factories. It could come in from fans or the clothing being worn,” she said.

Stiv Wilson, campaign coordinator at Story of Stuff, said finding plastic contamination in bottled water was problematic “because people are paying a premium for these products”.

Jacqueline Savitz, of campaign group Oceana, said: “We know plastics are building up in marine animals and this means we too are being exposed, some of us every day. Between the microplastics in water, the toxic chemicals in plastics and the end-of-life exposure to marine animals, it’s a triple whammy.”

Nestlé criticised the methodology of the Orb Media study, claiming in a statement to CBC that the technique using Nile red dye could “generate false positives”.

Coca-Cola told the BBC it had strict filtration methods, but acknowledged the ubiquity of plastics in the environment meant plastic fibres “may be found at minute levels even in highly treated products”.

A Gerolsteiner spokesperson said the company, too, could not rule out plastics getting into bottled water from airborne sources or from packing processes. The spokesperson said concentrations of plastics in water from their own analyses were lower than those allowed in pharmaceutical products.

Danone claimed the Orb Media study used a methodology that was “unclear”. The American Beverage Association said it “stood by the safety” of its bottled water, adding that the science around microplastics was only just emerging.

The Guardian contacted Nestlé and Boxed Water for comment on the Story of Stuff study, but had not received a response at the time of publication.

For climate action to take hold, activists need more than just polar bears

by Joshua Parfitt on 15 March 2018

A new study finds that people who do not have “biospheric concerns” are unconvinced by climate change arguments that hinge on such avatars as polar bears, coral reefs and pikas.

Researchers suggest policymakers, activists and the media must choose stories that hit closer to home, by focusing on the more personal impacts of climate change.

Scientists would also like to see more research on how to convince people who are largely concerned with their own narrow interests that climate change, and nature in general, matters.

Type “climate change” into any search engine and the results aren’t difficult to predict: you’ll probably see a woeful polar bear on a shrinking patch of ice. Either that or cracked, parched earth. But a new paper published in Global Environmental Change questions the power of nature to motivate climate action.

“Frequently, visual and verbal stimuli used in the media to describe threats of climate change feature plants, animals and other typical nature depictions,” said Sabrina Helm, associate professor of retailing and consumer science at the University of Arizona and lead author of the paper. “However, for people who are more concerned about possible effects on themselves, their family, or people in general […] such stimuli may not be effective.”

Helm’s paper distinguished three different forms of environmental concern among people: biospheric (concern for nature), social-altruistic (concern for other people), and egoistic (concern for oneself).

Participants in the study who showed biospheric concern were most likely to perform positive environmental behaviors. The paper concludes, however, that by catering only to biospheric concerns — and neglecting egoistic or social-altruistic concerns — policymakers and activists may be unintentionally “increasing the risks associated with delaying climate change adaptation.”

Researchers presented 342 adults in the U.S. with questions about what most concerned them regarding global environmental problems. Participants could choose from prepared answers that indicated egoistic concern (“my lifestyle,” for instance), social-altruistic concern (“my children”) or biospheric concern (“marine life”).

The study also plumbed participants’ so-called pro-environmental behaviors (PEBs), such as whether they used reusable bags, actively reduced emissions, or ate organic food.

Results indicated that whereas respondents with higher biospheric concern tended to perceive ecological stress and engage in pro-environmental behaviors, participants with social-altruistic concern were less perceptive though did engage in similar actions. Participants with higher egoistic concerns neither perceived ecological stress nor engaged in behavior to mitigate it.

Researchers believe this is because the objects of egoistic and social-altruistic concern, such as oneself and one’s family, are perceived as less vulnerable to climate change impacts than the natural systems at the center of biospheric concern.

Egoistic and social-altruistic respondents “did not seem to perceive climate change threats as having a profound effect on their own or their families’ life,” the scientists wrote in the paper.

This finding is also backed up by other psychological studies.

“We summarize that policymakers frequently emphasize climate change as a global, distant, and abstract societal risk,” said Sander van der Linden, a researcher at the Yale Program on Climate Change Communication, who was not involved in the study. Pointing to the constant use of polar bears as an avatar for climate change, van der Linden said: “Instead, we recommend that policymakers should change their approach to emphasizing the local, present, and concrete aspects of climate change as a personal risk.”

Van der Linden, who is also a psychology researcher at the University of Cambridge, co-authored a paper in 2015 outlining five “best practice” insights for how psychological research could improve public engagement with climate change.

Helm echoed van der Linden’s sentiment, encouraging the deployment of stories that “hit closer to home” for people for whom biospheric concerns do not register strongly. Some examples she suggested include linking climate change threats to issues of personal health, national security, and the well-being of future generations.

The researchers’ finding is a nuanced one: rather than using shock tactics to barge down the door of indifference, climate change communication may be a matter of finding the right keys to different locks.

In Helm’s paper, the scientists reference a 2009 publication by WWF-UK whose authors, evolutionary biologist Tom Crompton and psychology professor Tim Kasser, dissuade campaigners from encouraging egoism as a means to engage climate action. This is because, they argue, egoistic concerns can often engender a separation from nature: one feels superior to, rather than a part of, the natural world.

Instead, Crompton and Kasser argue that raising awareness of the inherent value of nature and empathy for non-human animals — in other words, cultivating biospheric concerns — is the better route to long-term environmental improvement.

Commenting on Crompton and Kasser’s research, Helm said that while “it may be desirable for all people to have biospheric concerns in mind,” in her view “it’s just not a reality.”

Both Kasser and Helm agree there are people who simply don’t care much for the environment, but also that telling such people to be more sensitive to biospheric concerns is not the answer.

Kasser suggested a different way in which climate change communicators could effectively reach individuals who showed little concern for the environment: through a sensitive and empathetic approach to discover their value systems.

“Having done that, it then becomes even more possible … to engage that person in thinking about his/her behaviors and … ways that can help him/her to see how protecting the environment is actually supportive and expressive of those values,” he wrote in an email.

In Helm’s paper, individuals with social-altruistic concerns also showed fewer pro-environmental behaviors than individuals with biospheric concerns. Where they did engage in such behaviors, however, the scientists hypothesized that it was because they felt something they valued would be strongly affected by climate change — in this case, their children’s future.

Using this approach, communicators could attract the attention of people with egoistic or altruistic concerns while also promoting a message of nature’s inherent worth to every value system.

Helm expressed hope that future research might examine more links between egoistic concerns in particular and positive environmental behaviors to figure out how to motivate pro-environmental consumption and climate change mitigation.

Either way, it probably won’t involve a polar bear.

CITATIONS

Crompton, T., Kasser, T., 2009. Meeting Environmental Challenges: The Role of Human Identity. WWF-UK, Godalming.

Helm, S. V., Pollitt, A., Barnett, M. A., Curran, M. A., & Craig, Z. R. (2018). Differentiating environmental concern in the context of psychological adaption to climate change. Global Environmental Change, 48, 158-167. DOI: 10.1016/j.gloenvcha.2017.11.012

van der Linden, S., Maibach, E., & Leiserowitz, A. (2015). Improving Public Engagement With Climate Change: Five “Best Practice” Insights From Psychological Science. Perspectives on Psychological Science, 10(6), 758-763. DOI: 10.1177/1745691615598516

Sinking land will exacerbate flooding from sea level rise in Bay Area

Subsidence combined with sea level rise around San Francisco Bay doubles flood-risk area

UNIVERSITY OF CALIFORNIA - BERKELEY

Rising sea levels are predicted to submerge many coastal areas around San Francisco Bay by 2100, but a new study warns that sinking land - primarily the compaction of landfill in places such as Treasure Island and Foster City - will make flooding even worse.

Using precise measurements of subsidence around the Bay Area between 2007 and 2011 from state-of-the-art satellite-based synthetic aperture radar (InSAR), scientists from the University of California, Berkeley, and Arizona State University mapped out the waterfront areas that will be impacted by various estimates of sea level rise by the end of the century.

They found that, depending on how fast seas rise, the areas at risk of inundation could be twice what had been estimated from sea level rise only.

Previous studies, which did not take subsidence into account, estimated that between 20 and 160 square miles (51 to 413 square kilometers) of San Francisco Bay shoreline face a risk of flooding by the year 2100, depending on how quickly sea levels rise.

Adding the effects of sinking ground along the shoreline, the scientists found that the area threatened by rising seawater rose to between 48 and 166 square miles (125 to 429 square kilometers).

"We are only looking at a scenario where we raise the bathtub water a little bit higher and look where the water level would stand," said senior author Roland Bürgmann, a UC Berkeley professor of earth and planetary science. "But what if we have a 100-year storm, or king tides or other scenarios of peak water-level change? We are providing an average; the actual area that would be flooded by peak rainfall and runoff and storm surges is much larger."

The data will help state and local agencies plan for the future and provide improved hazard maps for cities and emergency response agencies.

"Accurately measuring vertical land motion is an essential component for developing robust projections of flooding exposure for coastal communities worldwide," said Patrick Barnard, a research geologist with the U.S. Geological Survey in Menlo Park. "This work is an important step forward in providing coastal managers with increasingly more detailed information on the impacts of climate change, and therefore directly supports informed decision-making that can mitigate future impacts."

The low-end estimates of flooding reflect conservative predictions of sea level rise by 2100: about one and a half feet. Those are now being questioned, however, since ice sheets in Greenland and West Antarctica are melting faster than many scientists expected. Today, some extreme estimates are as high as five and a half feet.

That said, the subsidence - which the geologists found to be as high as 10 millimeters per year in some areas - makes less of a difference in extreme cases, Bürgmann noted. Most of the Bay Area is subsiding at less than 2 millimeters per year.
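The arithmetic behind combining the two effects is straightforward. As a rough sketch (assuming, purely for illustration, a constant subsidence rate from the article's 2018 vantage point through 2100), the effective local rise in water level is the global sea level rise plus the accumulated sinking:

```python
# Rough illustration of how land subsidence adds to sea level rise.
# The rates and the 1.5-foot low-end scenario come from the article;
# holding the subsidence rate constant to 2100 is a simplifying assumption.

YEARS_TO_2100 = 2100 - 2018   # illustrative horizon from the article's publication year

def effective_rise_mm(sea_level_rise_mm, subsidence_mm_per_yr, years=YEARS_TO_2100):
    """Total local water-level rise relative to the sinking ground surface."""
    return sea_level_rise_mm + subsidence_mm_per_yr * years

low_slr = 1.5 * 304.8   # ~1.5 ft of sea level rise expressed in mm

print(effective_rise_mm(low_slr, 2))    # typical Bay Area site (~2 mm/yr): ~621 mm
print(effective_rise_mm(low_slr, 10))   # fast-sinking landfill (~10 mm/yr): ~1277 mm
```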

"The ground goes down, sea level comes up and flood waters go much farther inland than either change would produce by itself," said first author Manoochehr Shirzaei, a former UC Berkeley postdoctoral fellow who is now an assistant professor in ASU's School of Earth and Space Exploration and a member of NASA's Sea Level Change planning team.

InSAR, which stands for interferometric synthetic aperture radar, has changed our view of Earth's landscape with its ability to detect changes in surface elevation as small as one millimeter, or four-hundredths of an inch, from Earth orbit. While it has been used to map landscapes worldwide - Bürgmann has used InSAR data to map landslides in Berkeley and land subsidence in Santa Clara County - this may be the first time someone has combined such data with future sea level estimates, he said. The team used continuous GPS monitoring of the Bay Area to link the InSAR data to sea level estimates.

"Flooding from sea level rise is clearly an issue in many coastal urban areas," Bürgmann said. "This kind of analysis is probably going to be relevant around the world, and could be expanded to a much, much larger scale."

In the Bay Area, one threatened area is Treasure Island, which is located in the Bay midway between San Francisco and Oakland and was created by landfill for the 1939 Golden Gate International Exposition. It is sinking at a rate of one-half to three-quarters of an inch (12 to 20 millimeters) per year.

Projections for San Francisco International Airport show that when land subsidence is combined with projected rising sea levels, water will cover approximately half the airport's runways and taxiways by the year 2100. Parts of Foster City were built in the 1960s on engineered landfill that is now subsiding, presenting a risk of flooding by 2100.

Not all endangered areas are landfill, however. Areas where streams and rivers have deposited mud as they flow into the Bay are also subsiding, partly because of compaction and partly because they are drying out. Other areas are subsiding because of groundwater pumping, which depletes the aquifer and allows the land to sink. In the early 20th century, the Santa Clara Valley at the south end of San Francisco Bay subsided as much as nine feet (three meters) due to groundwater depletion, though that has stabilized with restrictions on pumping.

Shirzaei noted that flooding is not the only problem with rising seas and sinking land. When formerly dry land becomes flooded, it causes saltwater contamination of surface and underground water and accelerates coastal erosion and wetland losses.

The work was supported by the National Science Foundation, National Aeronautics and Space Administration and Point Reyes Bird Observatory Conservation Science.

Running on renewables: How sure can we be about the future?

IMPERIAL COLLEGE LONDON

A variety of models predict the role renewables will play in 2050, but some may be over-optimistic, and should be used with caution, say researchers.

The proportion of UK energy supplied by renewable energies is increasing every year; in 2017 wind, solar, biomass and hydroelectricity produced as much energy as was needed to power the whole of Britain in 1958.

However, how much the proportion will rise by 2050 is an area of great debate. Now, researchers at Imperial College London have urged caution when basing future energy decisions on over-optimistic models that predict that the entire system could be run on renewables by the middle of this century.

Mathematical models are used to provide future estimates by taking into account factors such as the development and adoption of new technologies to predict how much of our energy demand can be met by certain energy mixes in 2050.

These models can then be used to produce 'pathways' that should ensure these targets are met - such as through identifying policies that support certain types of technologies.

However, the models are only as good as the data and underlying physics they are based on, and some might not always reflect 'real-world' challenges. For example, some models do not consider power transmission, energy storage, or system operability requirements.

Now, in a paper published today in the journal Joule, Imperial researchers have shown that studies that predict whole systems can run on near-100% renewable power by 2050 may be flawed as they do not sufficiently account for reliability of the supply.

Using data for the UK, the team tested a model for 100% power generation using only wind, water and solar (WWS) power by 2050. They found that the lack of firm and dispatchable 'backup' energy systems - such as nuclear or power plants equipped with carbon capture systems - means the power supply would fail often enough that the system would be deemed inoperable.

The team found that even if they added a small amount of backup nuclear and biomass energy, creating a 77% WWS system, around 9% of the annual UK demand could remain unmet, leading to considerable power outages and economic damage.
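The operability test behind that figure can be illustrated with a toy calculation. The sketch below is not the Imperial team's model; it simply shows how an unmet-demand fraction falls out when an hourly renewable supply series is compared against demand with a fixed amount of firm backup capacity (all numbers invented).

```python
# Toy illustration of how "unmet demand" is computed in power-system models:
# in each hour, any demand not covered by renewable output plus firm backup
# goes unserved. The numbers below are made up for illustration only.

def unmet_demand_fraction(demand, renewables, backup_capacity):
    """Fraction of total energy demand that cannot be served."""
    unmet = sum(max(d - r - backup_capacity, 0) for d, r in zip(demand, renewables))
    return unmet / sum(demand)

demand     = [35, 40, 45, 50, 45, 40]   # GW, hypothetical hourly demand
renewables = [30, 10, 25, 5, 40, 35]    # GW, hypothetical wind/solar/hydro output

print(unmet_demand_fraction(demand, renewables, backup_capacity=10))  # ~0.25
```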

Lead author Clara Heuberger, from the Centre for Environmental Policy at Imperial, said: "Mathematical models that neglect operability issues can mislead decision makers and the public, potentially delaying the actual transition to a low carbon economy. Research that proposes 'optimal' pathways for renewables must be upfront about their limitations if policymakers are to make truly informed decisions."

Co-author Dr Niall Mac Dowell, from the Centre for Environmental Policy at Imperial, said: "A speedy transition to a decarbonised energy system is vital if the ambitions of the 2015 Paris Agreement are to be realised.

"However, the focus should be on maximising the rate of decarbonisation, rather than the deployment of a particular technology, or focusing exclusively on renewable power. Nuclear, sustainable bioenergy, low-carbon hydrogen, and carbon capture and storage are vital elements of a portfolio of technologies that can deliver this low carbon future in an economically viable and reliable manner.

"Finally, these system transitions must be socially viable. If a specific scenario relies on a combination of hypothetical and potentially socially challenging adaptation measures, in addition to disruptive technology breakthroughs, this begins to feel like wishful thinking."

Current deforestation pace will intensify global warming, study alerts

In a Nature Communications article, an international group of scientists warns that continued deforestation at a rate of 7,000 square kilometers per year could nullify efforts to reduce GHG emissions

FUNDAÇÃO DE AMPARO À PESQUISA DO ESTADO DE SÃO PAULO

The global warming process may be even more intense than originally forecast unless deforestation can be halted, especially in the tropical regions. This warning has been published in Nature Communications by an international group of scientists.

"If we go on destroying forests at the current pace - some 7,000 km² per year in the case of Amazonia - in three to four decades, we'll have a massive accumulated loss. This will intensify global warming regardless of all efforts to reduce greenhouse gas emissions," said Paulo Artaxo, a professor at the University of São Paulo's Physics Institute (IF-USP).

The group reached this conclusion after successfully reproducing the planet's current atmospheric conditions in computer simulations, using a numerical model of the atmosphere developed by the Met Office, the UK's national meteorological service.

The model included meteorological factors such as levels of aerosols, anthropogenic and biogenic volatile organic compounds (VOCs), ozone, carbon dioxide and methane, along with other variables that influence global temperature, among them surface albedo. Albedo is a measure of the reflectivity of a surface; applied to Earth, it indicates how much of the Sun's energy is reflected back into space. The fraction absorbed depends on the type of surface.
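As a small worked example of the albedo term (the surface values below are typical textbook figures, not numbers from the study), the fraction of incoming solar energy a surface absorbs is simply one minus its albedo:

```python
# Illustrative albedo arithmetic. Values are typical textbook figures,
# not numbers from the study. Absorbed fraction = 1 - albedo.

SOLAR_INPUT = 340.0  # W/m^2, approximate global-mean incoming solar radiation

for surface, albedo in [("fresh snow", 0.8), ("forest", 0.12), ("dark ocean", 0.06)]:
    absorbed = (1 - albedo) * SOLAR_INPUT
    print(f"{surface}: reflects {albedo:.0%}, absorbs ~{absorbed:.0f} W/m^2")
```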

The work, coordinated by University of Leeds (UK) researcher Catherine Scott, also drew on years of analysis and field surveys of how tropical and temperate forests function, the gases emitted by vegetation, and their impact on climate regulation. Collection of data on tropical forests was coordinated by Artaxo as part of two Thematic Projects supported by the São Paulo Research Foundation - FAPESP: "GoAmazon: interactions of the urban plume of Manaus with biogenic forest emissions in Amazonia" and "AEROCLIMA: direct and indirect effects of aerosols on climate in Amazonia and Pantanal". Data on temperate forests were obtained in Sweden, Finland, and Russia, with collection coordinated by Erik Swietlicki, a professor at Lund University in Sweden.

Understanding how tropical forests control temperature

"After adjusting the model to reproduce the current conditions of Earth's atmosphere and the rise in surface temperatures that has occurred since 1850, we ran a simulation in which the same scenario was maintained but all forests were eliminated," Artaxo said. "The result was a significant rise of 0.8 °C in mean temperature. In other words, today the planet would be almost 1 °C warmer on average if there were no more forests."

The study also showed that the difference observed in the simulations was due mainly to emissions of biogenic VOCs from tropical forests.

"When biogenic VOCs are oxidized, they give rise to aerosol particles that cool the climate by reflecting part of the Sun's radiation back into space," Artaxo said. "Deforestation means no biogenic VOCs, no cooling, and hence future warming. This effect was not taken into account in previous modeling exercises."

Temperate forests produce different VOCs with less capacity to give rise to these cooling particles, he added.

The article notes that forests cover almost a third of the planet's land area, far less than before human intervention began. Huge swathes of forest in Europe, Asia, Africa and the Americas have been cleared.

"It's important to note that the article doesn't address the direct and immediate impact of forest burning, such as emissions of black carbon [considered a major driver of global warming owing to its high capacity for absorbing solar radiation]. This impact exists, but it lasts only a few weeks. The article focuses on the long-term impact on temperature variation," Artaxo said.

Deforestation, he stressed, lastingly alters the amount of aerosols and ozone in the atmosphere, changing the atmosphere's entire radiative balance.

"The urgent need to keep the world's forests standing is even clearer in light of this study. It's urgent not only to stop their destruction but also to develop large-scale reforestation policies, especially for tropical regions. Otherwise, the effort to reduce greenhouse gas emissions from fossil fuels won't make much difference," Artaxo said.

About São Paulo Research Foundation (FAPESP)

The São Paulo Research Foundation (FAPESP) is a public institution with the mission of supporting scientific research in all fields of knowledge by awarding scholarships, fellowships and grants to investigators linked with higher education and research institutions in the State of São Paulo, Brazil. FAPESP is aware that the very best research can only be done by working with the best researchers internationally. Therefore, it has established partnerships with funding agencies, higher education, private companies, and research organizations in other countries known for the quality of their research and has been encouraging scientists funded by its grants to further develop their international collaboration. For more information: http://www.fapesp.br/en.

North Pacific climate patterns influence El Nino occurrences

Study suggests El Nino predictions could be improved by considering off-equatorial climate patterns.

For decades, the world's leading scientists have observed the phenomena known as El Nino and La Nina. Both significantly impact the global climate and both pose a puzzle to scientists since they're not completely understood. Now, a new study helps clear up some of the obscurity surrounding El Nino and La Nina, which together are called the El Nino Southern Oscillation (ENSO). This new study examines ENSO frequency asymmetry during different phases of the Pacific Decadal Oscillation (PDO), a climate pattern in the North Pacific.

Previous studies have investigated the relationship between ENSO and PDO but none have examined if the warm (positive) and cool (negative) phases of PDO in the North Pacific influence the frequency of ENSO events in the tropical Pacific. "For the first time," said Prof. ZHENG Fei from the Institute of Atmospheric Physics, Chinese Academy of Sciences, and coauthor on the study, "we quantitatively demonstrated that El Nino is 300 percent more (58 percent less) frequent than La Nina in positive (negative) PDO phases."
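The asymmetry Zheng describes is, at bottom, a matter of counting events separately in each PDO phase. The sketch below shows that bookkeeping with invented event lists; it is not the authors' analysis of the observational or CMIP5 data.

```python
# Sketch of the frequency-asymmetry bookkeeping: count El Nino and La Nina
# events separately in positive- and negative-PDO years and compare.
# The event lists here are invented for illustration, not real data.

def nino_vs_nina_counts(events):
    """events: list of (pdo_phase, enso_type) tuples, e.g. ('+', 'El Nino')."""
    counts = {}
    for phase, kind in events:
        counts[(phase, kind)] = counts.get((phase, kind), 0) + 1
    for phase in ("+", "-"):
        nino = counts.get((phase, "El Nino"), 0)
        nina = counts.get((phase, "La Nina"), 0)
        print(f"PDO {phase}: {nino} El Nino events vs {nina} La Nina events")

# Invented example: El Nino dominates in positive-PDO years, La Nina in negative
sample = ([("+", "El Nino")] * 8 + [("+", "La Nina")] * 2 +
          [("-", "El Nino")] * 3 + [("-", "La Nina")] * 7)
nino_vs_nina_counts(sample)
```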

The findings were published in Advances in Atmospheric Sciences and selected as the cover article of Issue 5.

To arrive at their findings, the researchers used observational data and the output of 19 models from the Coupled Model Intercomparison Project Phase 5 (CMIP5). "By adopting the observations and CMIP5 climate model simulations," said Zheng, "we had an opportunity to explore how PDO modulates the occurrence of ENSO." This type of exploration meets an increasing need today, which is for scientists to better understand the mechanisms that affect the occurrence of ENSO.

While all the answers surrounding these events aren't known, the effects of ENSO are well understood. When sea surface temperatures are warmer or cooler than normal in the equatorial Pacific Ocean, weather patterns around the world are impacted. Everything from pressure systems to wind and precipitation can be influenced by ENSO, including the supply of water in a region since it can cause moisture extremes.

ZHENG and his group at the Chinese Academy of Sciences have been focusing on ENSO for more than 10 years. Results from their own recent studies, which showed that ENSO prediction relies heavily on off-equatorial physical processes, motivated this latest study of the PDO's effects, according to Zheng, who believes this approach is necessary to understand ENSO better. "Our study suggests that more attention should be paid to the processes outside the equator when attempting ENSO predictions to provide reliable warning of climate extreme events and avoid potential economic loss," he said.

Why efforts to use green fuels sometimes run afoul

Some microbes thrive on biofuels and can contaminate fuel equipment and clog engines

Thriving on fatty acid derivatives in a biodiesel blend, microbes can cause a fresh fuel sample (top left) to become a turbid liquid in a couple of weeks (top right) and eventually form a slimy mess (bottom, close-up view).

Credit: Bradley Stevenson/U Oklahoma

Most people exercise a healthy dose of caution when handling fuels like gasoline and diesel. Health-wise, they know you shouldn’t deeply inhale the fuels’ vapors or splash the liquids on your skin. Ingesting them is out of the question.

But it turns out that not everyone avoids contact with fuels: Some microbes love the stuff. In fact, various microorganisms thrive on gasoline and diesel fuel, substances that are clearly toxic to humans and animals. Unfortunately, this bug love can lead to fuel contamination, clogged or fouled equipment, and if left unchecked, even engine failure.

Scientists have long known about the threat of microbial fuel fouling. But they have more cause for concern now that the popularity of biofuels, such as biodiesel, is on the rise. Some bacteria and fungi crave the generous quantities of fatty acid compounds that make up biofuels.

Private motorists have little to worry about because they tend to use and replace small quantities of fuel frequently. But for airlines and other organizations that store enormous quantities of fuel, contamination could be a problem. For the U.S. Air Force, which has a mandate to rely increasingly on biobased fuels, extreme caution is in order.

That’s why Wendy J. Goodson studies the effects of microorganisms on Air Force weapons systems and fueling infrastructure. Goodson leads a research group at the Air Force Research Laboratory at Wright-Patterson Air Force Base in Ohio. Together with collaborators, these researchers are studying the effects of fuel biocontamination, the factors that promote this type of degradation, and improved ways of detecting it. They’re also zeroing in on the identities and genomes of the troublemaking organisms and devising ways of stopping them in their tracks.

Although people may find it surprising that some microorganisms happily eat engine fuels, it’s not exactly a secret. “Biodeterioration of fuels has been known and studied for more than a century,” asserts Frederick J. Passman, a consultant who specializes in controlling microbial contamination.

The earliest study, which focused on biocontamination of gasoline, was published in 1895 in a German-language journal, Passman says. And most of the follow-up studies were published in the microbiology literature, “not in the sort of journals likely to be read by petroleum engineers and organic chemists,” he adds.

Goodson concurs. She says fuel maintenance specialists were aware that fuel could become contaminated and fouled with biological growth, but the possibility seemed remote and “resided in the background of people’s knowledge and concern.” That led to some troubles for the military.

A decade ago, as the Air Force prepared to ramp up its use of bioderived fuels to comply with green energy initiatives developed during the George W. Bush administration, researchers examined alternative fuels for materials compatibility, Goodson says. They were confirming, for example, that the fatty acid methyl esters that are found in biodiesel would not react adversely with materials in fuel tanks, hoses, pumps, O-rings, and other fueling equipment.

“But they did not consider the effects of microbiology on the fuel system,” she notes. That’s because most fuels used by the Air Force and other organizations in the past 50 years—meaning petroleum-derived fuels—are not overly susceptible to biofouling.

Fleet operators don’t normally use pure biodiesel as a transportation fuel. It’s expensive, can gel at low temperatures, and may void some engine manufacturers’ warranties. So to go green, the Air Force began using blends of conventional diesel and biodiesel in its trucks and other ground vehicles.

These fatty acid methyl esters, which are common in biodiesel, are readily metabolized by some microbes and enable the organisms to grow in fuel.

Eventually, though, that led to contaminated storage tanks, plugged fuel filters, and other problems not commonly seen before the fuel switch, Goodson says. As a result, some fuel experts pooh-poohed the idea of using alternative fuels, concluding that “bio” meant “bad.”

Not so, Goodson contends. “There are a whole set of conditions—biological, physiological, and environmental—that have to be met to set off the perfect storm of fuel degradation.” Preventing fuel fouling calls for evaluating those details thoroughly. And given that worldwide biodiesel production is on the rise—the Organisation for Economic Co-operation & Development projects an increase from 31 billion L in 2015 to 41 billion L by 2025—the U.S. military isn’t the only organization that needs to consider the effects of microbiology on fuels.

The common denominator in biocontamination of fuels, alternative and conventional alike, is water. As Passman explains, the reason is simple: Water is an essential factor for microbial activity. Yet preventing its accumulation in fuel systems is difficult.

Hot fuel leaving a refinery reactor is sterile, but it doesn’t stay that way. While sitting in tanks and pipelines, which aren’t airtight, fuel comes into contact with air and water vapor. As the fuel cools during shipping or in storage at tank farms, water becomes less soluble and condenses.

Even when tank farm operators follow best housekeeping practices, the condensed water, which is more dense than fuel, accumulates at the bottoms of tanks and low points in pipelines, forming habitats in which fuel-feeding microbes can thrive. “The volume of accumulated water may seem trivial to an engineer, but it’s an ocean to a microbe,” Passman says.

Water problems can be even worse in underground storage tanks and worse still in ones that hold biodiesel, which is more hygroscopic and therefore absorbs more water than conventional diesel does. So underground biodiesel tanks are exactly where a team led by Goodson and University of Oklahoma microbiologist Bradley S. Stevenson recently focused their attention.

In a yearlong study, the team examined the growth of microbial communities and the extent of bioinduced corrosion in several large underground storage tanks at two U.S. Air Force bases. The study was unique in that it quantitatively probed microbial activity in tanks that were actively in service—specifically, they were being used to store and dispense B20, a commercial biodiesel blend composed of 20% fatty acid methyl esters and 80% conventional ultra-low-sulfur diesel.

To look for microbial activity, the researchers immersed polymer-coated metal plates—simulating the materials that storage tanks are made of—to various depths in the tanks. They withdrew and analyzed them at regular intervals. They also collected fuel samples from the bottoms of the tanks and compared them with reference samples taken directly from fuel-delivery trucks.

What they found was that several microorganisms were wreaking biohavoc in the tanks. The list includes Acetobacteriaceae and other types of bacteria, as well as various yeasts, including Candida and Pichia. But the most prevalent microbe, and the worst actor by far, was a filamentous fungus of the genus Byssochlamys, a microorganism known to cause food spoilage. That bug, with a little help from its friends, caused the metal plates to visibly foul—become coated with orange and red slimy films—and led to fuel samples that showed varying degrees of turbidity and slime accumulation.

The study, which the team has not yet published, also showed that the most heavily fouled plates were the most pitted and corroded ones and that corrosion correlates with levels of fuel acidification. When microbes decompose the fatty acid compounds in the biofuels, they generate organic acids and CO2, species known to promote corrosion.

Now that the researchers know which bugs are the troublemakers, they are studying ways to control them. They are also aiming to understand relationships among coexisting microorganisms. As an example, Stevenson explains that the metabolic product of one microbe might be the energy source for a different microbe in that community. Such information could be helpful in combating fuel fouling.

Elsewhere at AFRL, biologist Oscar N. Ruiz works with collaborators at the University of Dayton Research Institute to fight fuel fouling in other ways. They use genomics methods to understand the molecular machinery in microbes that enables them to metabolize fuel. That’s the first step in mitigating the problem, Ruiz says.

A few years ago, Ruiz and coworkers studied Pseudomonas aeruginosa, a bacterium that readily decomposes jet fuel alkanes, especially those in the C11–C18 range. To thrive, the microbe needs to protect itself from toxic aromatic and cyclic hydrocarbons also found in jet fuel.

It turns out that P. aeruginosa uses efflux pumps, protein transporters in cell membranes, to drive the poisonous jet fuel components out of its cells. Armed with that information, the team showed that a peptide-based molecule could shut down the organism’s efflux pumps and prevent the bacteria from growing in jet fuel (Environ. Sci. Technol. 2013, DOI: 10.1021/es403163k). Compared with treating the fuel with a biocide that’s toxic to people, the peptide strategy, for which the team filed a patent, provides a distinct safety advantage.

To gain the same type of molecular upper hand over other fuel-degrading microbes, Ruiz and coworkers recently sequenced the genomes of several previously uncharacterized strains of microbes that they isolated from fuel environments. The list includes a strain of Nocardioides luteus, an alkane-degrading bacterium collected from soil polluted with a type of jet fuel called JP-7; Fusarium fujikuroi, a fungus recovered from Jet A fuel; and Pseudomonas stutzeri strain 19, a Gram-negative bacterium that metabolizes aromatic hydrocarbons.

“This type of biodegradation is like a fuel disease,” Ruiz says. The first step in controlling it is understanding the microorganism that causes the disease. That means determining the fuel components that a microbe is capable of decomposing and then identifying the genetic basis for that metabolic function.

Microorganisms exist everywhere, including in fuel and fuel systems, Goodson says. There are many circumstances under which they will cause a degradation problem and plenty under which they won’t.

Sorting through the good guys, the bad guys, and all the fuel types is a major project, but Goodson doesn’t let the complexity of the project bug her. “That’s the puzzle I like to work on,” she says.

Lego to launch its first sustainable, plant-based plastic bricks this year

Nick Lavars, March 4th, 2018

Lego says the new pieces made with sugarcane ethanol make up between one and two percent of the total plastic pieces it produces

After ramping up its efforts to shift towards more sustainable materials in 2015, perennial purveyor of plastic blocks Lego is making good on that promise and is set to launch a new form of eco-friendly playthings this year.

The toymaker established its Sustainable Materials Centre two-and-a-half years ago, announcing a US$165 million investment aimed at uncovering more sustainable production methods for its beloved plastic bricks. It had previously declared its ambition to use sustainable materials in its core products and packaging by 2030.

The first fruits of this labor will be Lego pieces crafted from a plant-based polyethylene plastic, made with ethanol sourced from sugarcane. Fittingly, these will be plant-oriented pieces such as leaves, bushes and trees, made 100 percent from the plant-based plastic. According to Lego, these pieces make up between one and two percent of the total plastic pieces it produces, and they have been tested to ensure they offer the same degree of durability.

"Lego products have always been about providing high quality play experiences giving every child the chance to shape their own world through inventive play," says Tim Brooks, Vice President, Environmental Responsibility at the LEGO Group. "Children and parents will not notice any difference in the quality or appearance of the new elements, because plant-based polyethylene has the same properties as conventional polyethylene."

The new botanical, plant-based elements will begin appearing in Lego boxes in 2018.

Women who clean at home or work face increased lung function decline

AMERICAN THORACIC SOCIETY

Feb. 16, 2018--Women who work as cleaners or regularly use cleaning sprays or other cleaning products at home appear to experience a greater decline in lung function over time than women who do not clean, according to new research published online in the American Thoracic Society's American Journal of Respiratory and Critical Care Medicine.

In "Cleaning at Home and at Work in Relation to Lung Function Decline and Airway Obstruction," researchers at the University of Bergen in Norway analyzed data from 6,235 participants in the European Community Respiratory Health Survey. The participants, whose average age was 34 when they enrolled, were followed for more than 20 years.

"While the short-term effects of cleaning chemicals on asthma are becoming increasingly well documented, we lack knowledge of the long-term impact," said senior study author Cecile Svanes, MD, PhD, a professor at the university's Centre for International Health. "We feared that such chemicals, by steadily causing a little damage to the airways day after day, year after year, might accelerate the rate of lung function decline that occurs with age."

The study found that compared to women not engaged in cleaning:

Forced expiratory volume in one second (FEV1), or the amount of air a person can forcibly exhale in one second, declined 3.6 milliliters (ml)/year faster in women who cleaned at home and 3.9 ml/year faster in women who worked as cleaners.

Forced vital capacity (FVC), or the total amount of air a person can forcibly exhale, declined 4.3 ml/year faster in women who cleaned at home and 7.1 ml/year faster in women who worked as cleaners.

The authors found that the accelerated lung function decline in the women working as cleaners was "comparable to smoking somewhat less than 20 pack-years."
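As a rough, back-of-the-envelope illustration (not a figure from the study itself; the 20-year horizon is simply taken from the length of follow-up), the sketch below multiplies the reported excess decline rates by two decades to show the cumulative extra FEV1 loss they imply.

```python
# Rough illustration only: cumulative extra FEV1 loss implied by the reported
# excess decline rates, assuming a 20-year horizon (the study followed
# participants for more than 20 years). Not data from the paper itself.
excess_decline_ml_per_year = {
    "cleaned at home": 3.6,
    "worked as cleaners": 3.9,
}
follow_up_years = 20  # assumed horizon for illustration

for group, rate in excess_decline_ml_per_year.items():
    extra_loss_ml = rate * follow_up_years
    print(f"Women who {group}: roughly {extra_loss_ml:.0f} ml more FEV1 lost "
          f"than non-cleaners over {follow_up_years} years")
```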

That level of lung impairment was surprising at first, said lead study author Øistein Svanes, a doctoral student at the university's Department for Clinical Science. "However, when you think of inhaling small particles from cleaning agents that are meant for cleaning the floor and not your lungs, maybe it is not so surprising after all."

The authors speculate that the decline in lung function is attributable to the irritation that most cleaning chemicals cause on the mucous membranes lining the airways, which over time results in persistent changes in the airways and airway remodeling.

The study did not find that the ratio of FEV1 to FVC declined more rapidly in women who cleaned than in those who did not. The metric is used when diagnosing and monitoring patients with chronic obstructive pulmonary disease, or COPD. The study did find that asthma was more prevalent in women who cleaned at home (12.3 percent) or at work (13.7 percent) compared to those who did not clean (9.6 percent).

The study also did not find that men who cleaned, either at home or at work, experienced greater decline in FEV1 or FVC than men who did not.

The researchers took into account factors that might have biased the results, including smoking history, body mass index and education.

Study limitations include the fact that the study population included very few women who did not clean at home or work. These women, the authors wrote, might "constitute a selected socioeconomic group." The number of men who worked as occupational cleaners was also small, and their exposure to cleaning agents was likely different from that of women working as cleaning professionals.

"The take home message of this study is that in the long run cleaning chemicals very likely cause rather substantial damage to your lungs," Øistein Svanes said. "These chemicals are usually unnecessary; microfiber cloths and water are more than enough for most purposes."

He added that public health officials should strictly regulate cleaning products and encourage producers to develop cleaning agents that cannot be inhaled.

"Study finds #lung function declines faster in women who clean at home or work than those who do not."

About the American Journal of Respiratory and Critical Care Medicine (AJRCCM):

The AJRCCM is a peer-reviewed journal published by the American Thoracic Society. The Journal takes pride in publishing the most innovative science and the highest quality reviews, practice guidelines and statements in pulmonary, critical care and sleep medicine. With an impact factor of 12.996, it is the highest ranked journal in pulmonology. Editor: Jadwiga Wedzicha, MD, professor of respiratory medicine at the National Heart and Lung Institute (Royal Brompton Campus), Imperial College London, UK.

About the American Thoracic Society:

Founded in 1905, the American Thoracic Society is the world's leading medical association dedicated to advancing pulmonary, critical care and sleep medicine. The Society's 15,000 members prevent and fight respiratory disease around the globe through research, education, patient care and advocacy. The ATS publishes three journals, the American Journal of Respiratory and Critical Care Medicine, the American Journal of Respiratory Cell and Molecular Biology and the Annals of the American Thoracic Society.

The ATS will hold its 2018 International Conference, May 18-23, in San Diego, California, where world-renowned experts will share the latest scientific research and clinical advances in pulmonary, critical care and sleep medicine.

Consumer and industrial products now a dominant urban air pollution source

New study finds surprisingly high contribution from paints, pesticides, perfumes as vehicle emissions drop

UNIVERSITY OF COLORADO AT BOULDER

Chemical products that contain compounds refined from petroleum, like household cleaners, pesticides, paints and perfumes, now rival motor vehicle emissions as the top source of urban air pollution, according to a surprising NOAA-led study.

People use a lot more fuel than they do petroleum-based compounds in chemical products--about 15 times more by weight, according to the new assessment. Even so, lotions, paints and other products contribute about as much to air pollution as the transportation sector does, said lead author Brian McDonald, a CIRES scientist working in NOAA's Chemical Sciences Division. In the case of one type of pollution--tiny particles that can damage people's lungs--particle-forming emissions from chemical products are about twice as high as those from the transportation sector, his team found. McDonald and colleagues from NOAA and several other institutions reported their results today in the journal Science.
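A minimal sketch of the arithmetic implied by that comparison (the 15-times figure comes from the paragraph above; the equal-contribution ratio is an assumption for illustration, not a number from the paper's tables): if chemical products are used about 15 times less by mass than fuels yet contribute a comparable amount of pollution, their emissions per kilogram used must be roughly an order of magnitude higher.

```python
# Back-of-the-envelope reasoning, not figures from the Science paper:
# fuels are used ~15x more by weight than petroleum-based chemical products,
# yet the two contribute roughly equally to VOC-driven air pollution.
fuel_to_product_use_ratio = 15.0   # "about 15 times more by weight"
pollution_share_ratio = 1.0        # assumed: roughly equal contributions

relative_potency = fuel_to_product_use_ratio * pollution_share_ratio
print(f"Chemical products emit roughly {relative_potency:.0f}x more "
      f"smog-forming VOCs per kilogram used than fuels do")
```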

"As transportation gets cleaner, those other sources become more and more important," McDonald said. "The stuff we use in our everyday lives can impact air pollution."

For the new assessment, the scientists focused on volatile organic compounds or VOCs. VOCs can waft into the atmosphere and react to produce either ozone or particulate matter--both of which are regulated in the United States and many other countries because of health impacts, including lung damage.

Those of us living in cities and suburbs assume that much of the pollution we breathe comes from car and truck emissions or leaky gas pumps. That's for good reason: it was clearly true in past decades. But regulators and car manufacturers made pollution-limiting changes to engines, fuels and pollution control systems. So McDonald and his colleagues reassessed air pollution sources by sorting through recent chemical production statistics compiled by industries and regulatory agencies, by making detailed atmospheric chemistry measurements in Los Angeles air, and by evaluating indoor air quality measurements made by others.

The scientists concluded that in the United States, the amount of VOCs emitted by consumer and industrial products is actually two or three times greater than estimated by current air pollution inventories, which also overestimate vehicular sources. For example, the Environmental Protection Agency estimates that about 75 percent of fossil VOC emissions came from fuel related sources, and about 25 percent from chemical products. The new study, with its detailed assessment of up-to-date chemical use statistics and previously unavailable atmospheric data, puts the split closer to 50-50.

The disproportionate air quality impact of chemical product emissions is partly because of a fundamental difference between those products and fuels, said NOAA atmospheric scientist Jessica Gilman, a co-author of the new paper. "Gasoline is stored in closed, hopefully airtight, containers and the VOCs in gasoline are burned for energy," she said. "But volatile chemical products used in common solvents and personal care products are literally designed to evaporate. You wear perfume or use scented products so that you or your neighbor can enjoy the aroma. You don't do this with gasoline," Gilman said.

The team was particularly interested in how those VOCs end up contributing to particulate pollution. A comprehensive assessment published in the British medical journal Lancet last year put air pollution in a top-five list of global mortality threats, with "ambient particulate matter pollution" as the largest air pollution risk. The new study finds that as cars have gotten cleaner, the VOCs forming those pollution particles are coming increasingly from consumer products.

"We've reached that transition point already in Los Angeles," McDonald said.

He and his colleagues found that they simply could not reproduce the levels of particles or ozone measured in the atmosphere unless they included emissions from volatile chemical products. In the course of that work, they also determined that people are exposed to very high concentrations of volatile compounds indoors, where these chemicals are far more concentrated than they are outdoors, said co-author Allen Goldstein of the University of California, Berkeley.

"Indoor concentrations are often 10 times higher indoors than outdoors, and that's consistent with a scenario in which petroleum-based products used indoors provide a significant source to outdoor air in urban environments."

The new assessment does find that the U.S. regulatory focus on car emissions has been very effective, said co-author Joost de Gouw, a CIRES chemist. "It's worked so well that to make further progress on air quality, regulatory efforts would need to become more diverse," de Gouw said. "It's not just vehicles any more."

Authors of "Volatile Chemical Products Emerging as Largest Petrochemical Source of Urban Organic Emissions," published in Science, are: Brian C. McDonald (CIRES and NOAA Chemical Sciences Division), Joost A. de Gouw (CIRES and NOAA Chemical Sciences Division), Jessica B. Gilman (NOAA Chemical Sciences Division), Shantanu H. Jathar (Colorado State University and University of California Davis), Ali Akherati (Colorado State University), Christopher D. Cappa (University of California Davis), Jose L. Jimenez (CIRES and CU Boulder), Julia Lee-Taylor (CIRES and NCAR), Patrick L. Hayes (Universite of Montreal), Stuart A. McKeen (CIRES and NOAA Chemical Sciences Division), Yu Yan Cui (CIRES and NOAA Chemical Sciences Division), Si-Wan Kim (CIRES and NOAA Chemical Sciences Division), Drew R. Gentner (Yale University), Gabriel Isaacman (NCAR), VanWertz (Virginia Tech), Allen H. Goldstein (University of California-Berkeley), Robert A. Harley (University of California-Berkeley), Gregory J. Frost (NOAA Chemical Sciences Division), James M. Roberts (NOAA Chemical Sciences Division), Thomas B. Ryerson (NOAA Chemical Sciences Division), Michael Trainer (NOAA Chemical Sciences Division)

This research was supported by NOAA, the CIRES Visiting Fellowship Program, Aerodyne Research, Inc, the National Science Foundation and the Sloan Foundation.

CIRES is a partnership of NOAA and the University of Colorado Boulder

Europe's cities face more extreme weather than previously thought

NEWCASTLE UNIVERSITY

PUBLIC RELEASE: 20-FEB-2018

The research, by Newcastle University, UK, has for the first time analysed changes in flooding, droughts and heatwaves for all European cities using all climate models.

Published today in the academic journal Environmental Research Letters, the study shows:

a worsening of heatwaves for all 571 cities

increasing drought conditions, particularly in southern Europe

an increase in river flooding, especially in north-western European cities

for the worst projections, increases in all hazards for most European cities

Cork, Derry, Waterford, Wrexham, Carlisle, Glasgow, Chester and Aberdeen are the worst hit cities in the British Isles for river flooding

Even in the most optimistic case, 85% of UK cities with a river - including London - are predicted to face increased river flooding

Using projections from all available climate models (associated with the high emission scenario RCP8.5 which implies a 2.6°C to 4.8°C increase in global temperature), the team showed results for three possible futures which they called the low, medium and high impact scenarios.

The study shows that even the most optimistic of these - the low impact scenario - predicts both the number of heatwave days and their maximum temperature will increase for all European cities.

Southern European cities will see the biggest increases in the number of heatwave days, while central European cities will see the greatest increase in temperature during heatwaves - between 2°C and 7°C for the low scenario and between 8°C and 14°C for the high scenario.

For changes in droughts and floods, the cities which are affected depend on the scenario. For the low impact scenario, drought conditions only intensify in southern European cities while river flooding only worsens in north-western ones.

The British Isles have some of the worst overall flood projections. Even in the most optimistic scenario, 85% of UK cities with a river - including London - are predicted to face increased river flooding, while for the high scenario, half of UK cities could see at least a 50% increase on peak river flows. The cities predicted to be worst hit under the high impact scenario are Cork, Derry, Waterford, Wrexham, Carlisle and Glasgow and for the more optimistic, low impact, scenario are Derry, Chester, Carlisle, Aberdeen, Glasgow and Newcastle.

By 2051-2100, for the low impact scenario, cities in the south of Iberia, such as Malaga and Almeria, are expected to experience droughts more than twice as bad as in 1951-2000. For the high impact scenario, 98% of European cities could see worse droughts in the future, and cities in Southern Europe may experience droughts up to 14 times worse than today.

"Although southern European regions are adapted to cope with droughts, this level of change could be beyond breaking point," Dr Selma Guerreiro, lead author, explains.

"Furthermore, most cities have considerable changes in more than one hazard which highlights the substantial challenge cities face in managing climate risks."

The implications of the study in terms of how Europe adapts to climate change are far-reaching, says Professor Richard Dawson, co-author and lead investigator of the study.

"The research highlights the urgent need to design and adapt our cities to cope with these future conditions.

"We are already seeing at first hand the implications of extreme weather events in our capital cities. In Paris the Seine rose more than 4 metres above its normal water level. And as Cape Town prepares for its taps to run dry, this analysis highlights that such climate events are feasible in European cities too."

80% increase in peak river flows

Of the European capitals, Dublin, Helsinki, Riga, Vilnius and Zagreb are likely to experience the most extreme rise in flooding. For the high impact scenario, several European cities could see more than 80% increases on peak river flows, including Santiago de Compostela in Spain, Cork and Waterford in Ireland, Braga and Barcelos in Portugal and Derry/ Londonderry in the UK.

Stockholm and Rome could see the greatest increase in the number of heatwave days, while Prague and Vienna could see the greatest increase in maximum temperatures during heatwaves. Lisbon and Madrid are in the top capital cities for increases in frequency and magnitude of droughts, while Athens, Nicosia, Valletta and Sofia might experience the worst increases in both drought and heatwaves.

The United Nations' Intergovernmental Panel on Climate Change (IPCC) has recognised the important role cities must play in tackling climate change and next month will hold its first Cities and Climate Change Science Conference, in Edmonton, Canada.

"A key objective for this conference," explains Professor Dawson, who sits on the Scientific Steering Committee for the IPCC Conference, "is to bring together and catalyse action from researchers, policy makers and industry to address the urgent issue of preparing our cities, their population, buildings and infrastructure for climate change."

Extreme weather: The top European capital cities which will see the greatest rise for each hazard

1 - Flooding: Dublin, Helsinki, Riga, Vilnius, Zagreb

2 - Heatwaves: Athens, Nicosia, Prague, Rome, Sofia, Stockholm, Valletta, Vienna

3 - Drought: Athens, Lisbon, Madrid, Nicosia, Sofia, Valletta

GE to develop world's largest wind turbine in France

by Geert De Clercq

PARIS (Reuters) - General Electric plans to invest more than $400 million over the next three to five years to develop the world’s biggest offshore wind turbine, which will have a capacity of 12 megawatts and stand 260 meters (853 feet) tall.

With 107-metre blades, longer than a soccer field, the Haliade-X turbine will produce enough power for up to 16,000 households, GE said in a statement.
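As a sanity check on that household figure, here is a minimal sketch of the usual conversion from nameplate capacity to homes supplied. Only the 12 MW rating comes from the article; the capacity factor and per-household consumption are illustrative assumptions, not GE numbers.

```python
# Illustrative only: converting a nameplate rating into households served.
# The 12 MW rating comes from the article; everything else is an assumption.
rated_power_mw = 12
capacity_factor = 0.55            # assumed for a large offshore turbine
household_kwh_per_year = 4000     # assumed average household consumption

annual_output_mwh = rated_power_mw * capacity_factor * 8760  # hours per year
households = annual_output_mwh * 1000 / household_kwh_per_year

print(f"Annual output: {annual_output_mwh:,.0f} MWh")
print(f"Households supplied at assumed consumption: {households:,.0f}")
```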

“We want to lead in the technologies that are driving the global energy transition,” CEO John Flannery said.

GE Renewable Energy will develop and manufacture the new turbine largely in France and aims to supply its first nacelle - or power-generating unit - for demonstration in 2019 and ship the first turbines in 2021.

The firm will invest close to $100 million in a new blade manufacturing plant in Cherbourg, western France, which will open in 2018. It will also invest close to $60 million over the next five years to modernize its Saint-Nazaire factory, where the nacelles for the Haliade-X will be built.

GE, already a major global player in onshore wind, entered the offshore wind turbine market through its takeover of France’s Alstom in 2015.

GE said the new turbine - which will have a direct-drive power generator rather than a gearbox - will be 30 percent bigger than its nearest competitors.

In June 2017, MHI Vestas, a joint venture between Vestas and Mitsubishi Heavy Industries, launched a 9.5 MW offshore turbine, currently the world’s most powerful wind turbine.

Standing 187 meters tall and with 80-metre blades, it is an upgrade of MHI Vestas’ 8 MW V164 turbine, which is already in operation at the Burbo Bank Extension and Blyth offshore wind farms in Britain.

MHI Vestas has also been named preferred supplier for Britain’s Triton Knoll and Moray East offshore wind farms for a total of 190 of the 9.5 MW turbines.

An MHI Vestas spokesman declined to comment on future turbine development.

Offshore wind turbines - which, unlike onshore turbines, are not limited by overland truck transport - have grown rapidly in size in recent years, as bigger turbines capture more wind and reduce maintenance costs and capital spending.

Onshore wind turbines in Europe have average capacities of about 2.7 megawatts, less than half the 6 MW average capacity of offshore turbines, according to trade group Wind Europe.

Reporting by Geert De Clercq; Editing by Mark Potter

Norfolk Requires Developers To Do More Against Flooding

March 1, 2018, 10:08 AM ET

by Sarah McCammon

Norfolk, Va. is among the American cities most threatened by sea-level rise. A new zoning ordinance aims to get in front of that, but some home builders are not happy with the plan. The debate is a preview of the challenges residents of coastal areas may face in years to come, as more communities are threatened by rising seas.

For Norfolk, the rising tides are already a reality that can't be ignored.

"There are far too many evenings where the moon is full and the tide is high, where there are streets where the kayak is the preferred vehicle of choice," says George Homewood, the city's planning director.

Homewood says the city – surrounded by water on three sides and home to the world's largest naval base – has no choice but to get ready for rising seas.

"It is something that the citizens of our city are extremely and painfully aware of," he says.

Last year, the U.S. Army Corps of Engineers said Norfolk needs $1.8 billion in infrastructure improvements to reduce its flood risk. That's a huge, and possibly insurmountable, price tag. For now, the city is taking a smaller step to prepare for that increasingly wet future with its new zoning ordinance.

The regulations, taking effect March 1, require virtually all new construction – and many major home renovations – to be built on an elevated foundation, even in neighborhoods where flood insurance isn't required. It also sets up a point system that requires builders to take steps to make homes more sustainable and flood-resilient. Options include things like installing rain barrels to capture storm water, or using shatter-resistant windows to better withstand hurricanes.

Mary-Carson Stiff works in environmental advocacy and lives in a flood-prone Norfolk neighborhood. When Stiff and her husband bought the house a few years ago, sea-level rise and flooding were very much on her mind. She says she was happy to find a house on a hill, with a nice tall foundation, that's been standing for almost 100 years.

"The streets flood, but your homes may stay dry. That's certainly the case on our street," Stiff said.

Stiff can see a creek from her front porch, and says the road along it often floods during heavy rains and high tides. She supports the new ordinance, which passed the city council unanimously.

But some local real estate agents and builders say they're worried about how the new rules – which also include some aesthetic guidelines along with flood mitigation measures – will affect their bottom line.

Nick Jacovides, President of EDC Homes, says he understands that flooding is a problem here. "Many times we're tearing down homes that are in flood zones, and building them up at the new flood elevations," he says.

But Jacovides has several concerns about the new rules. He says some buyers see rain barrels as eyesores and mosquito traps. Plus, "it's certainly going to add to the cost of doing business and building the homes," he says, and "we're going to have to pass [that] through to the customers."

It does cost a bit more to install rain barrels or build a taller foundation, for example.

But Chad Berginnis, of the Association of State Floodplain Managers, says it will pay off for home buyers in at-risk areas. He says zoning rules that take sea-level rise into account will mean lower flood insurance premiums for homeowners.

"For most property owners ... it's that monthly house payment that becomes more important," Berginnis says. "And their monthly cost of ownership could go down significantly because of these standards."

Berginnis likes that Norfolk is applying the new standards outside the high-risk zone. He points to cities like Houston, which have seen more widespread flooding in recent years, as an example of why it's important to look beyond federal flood maps when planning for disasters. Houston is debating its own flood-prevention standards, which are set for a vote later this month.

"These types of standards should protect property values for decades to come," he says.

As price drops, renewable energy surges to second place in Minnesota

Renewables are now the No. 2 source of power in state as prices come down

By Mike Hughlett Star Tribune MARCH 1, 2018 — 9:49PM

Renewable energy has moved into second place as Minnesota’s largest source of electricity generation, nudging out nuclear power but still trailing coal.

Meanwhile, the cost of wind energy in Minnesota — even without tax subsidies — now appears lower than electricity produced from both natural gas and coal.

Both conclusions come from a report released Thursday by Bloomberg New Energy Finance, which tracks power generation trends nationally. The report was presented at an event in St. Paul hosted by the Business Council for Sustainable Energy and Clean Energy Economy Minnesota, two industry-led nonprofit groups.

The Bloomberg report also found that over the past few years, even though renewable energy costs have fallen, electricity price increases in Minnesota have accelerated.

Minnesota has gone from having retail electricity prices that were a little below the national average to roughly average. The average retail electricity price in Minnesota rose 13 percent from 2013 to 2017, the Bloomberg report said.

Renewable energy made up 25 percent of the state’s electricity generation in 2017, up from 23 percent in 2016 and 21 percent in 2013. Wind power alone accounted for 18 percent of Minnesota’s generation last year, with hydro and solar comprising most of the rest of the renewable category.

Coal is still the top source of electricity in the state, though its share is down from 2012.

Nuclear power accounted for 23 percent of Minnesota’s generation mix in 2017, the same as the previous year. Coal-fired power was still king with a 39 percent share in 2017, the same as a year earlier.

However, coal’s share of the power mix in Minnesota is down from a range of 43 percent to 49 percent between 2012 and 2015. Natural gas-fired power made up 12 percent of Minnesota’s generation last year, down from 15 percent in 2016.

Coal’s decline in Minnesota mirrors national trends. Utilities are increasingly switching to natural gas, a cheaper fuel than coal. Also, state mandates have prompted a move away from coal to more environmentally friendly renewables and natural gas.

Tax breaks have made wind and solar power more attractive for power producers, but wind energy in Minnesota — and other states with rich wind resources — is a low-cost alternative even without subsidies.

In 2017, unsubsidized wind power cost $45 per megawatt hour in Minnesota on a levelized cost of electricity basis, which takes into account the cost to build a generation asset and its total power output, according to the Bloomberg analysis. Nationally, unsubsidized wind energy was $51 per megawatt hour on average, while natural gas clocked in at $49 per megawatt hour.
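For readers unfamiliar with the metric, the sketch below shows a simplified levelized-cost calculation: lifetime costs divided by lifetime electricity output, both discounted. The inputs are hypothetical and the formula is a textbook simplification, not Bloomberg New Energy Finance's model.

```python
# Simplified, illustrative LCOE sketch -- hypothetical inputs, not Bloomberg's model.
def lcoe(capex, annual_opex, annual_mwh, lifetime_years, discount_rate):
    """Return a levelized cost of electricity in $/MWh for one generation asset."""
    disc_costs = capex      # capital spent up front
    disc_output = 0.0
    for year in range(1, lifetime_years + 1):
        factor = (1 + discount_rate) ** year
        disc_costs += annual_opex / factor   # discounted operating costs
        disc_output += annual_mwh / factor   # discounted energy output
    return disc_costs / disc_output

# Hypothetical 100 MW wind farm: $130M capex, $3M/yr O&M,
# 38% capacity factor, 25-year life, 7% discount rate.
annual_mwh = 100 * 0.38 * 8760
print(f"Illustrative LCOE: ${lcoe(130e6, 3e6, annual_mwh, 25, 0.07):.0f}/MWh")
```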

Bloomberg didn’t provide state cost data for coal or natural gas. Nationally, coal-fired power cost $66 per megawatt hour on average last year.

In 2017, the unsubsidized cost of solar power in Minnesota fell dramatically, the Bloomberg report said. In 2016, it was around $125 per megawatt hour; last year, it was $76 per megawatt hour.

The Bloomberg report also found that Minnesota scored well in terms of its overall energy efficiency programs and policies, ranking ninth out of all states in 2017, up a notch from 10th the previous year.

Also, Minnesota’s power sector slashed its carbon emissions 27 percent from 2005 to 2017, the report said. Still, its carbon emission rate is higher than the national average.

How America's clean coal dream unravelled

Exclusive: Kemper power plant promised to be a world leader in ‘clean coal’ technology but Guardian reporting found evidence top executives knew of construction problems and design flaws years before the scheme collapsed

by Sharon Kelly

Fri 2 Mar 2018 02.00 EST

Last modified on Fri 2 Mar 2018 12.03 EST

High above the red dirt and evergreen trees of Kemper County, Mississippi, gleams a 15-story monolith of pipes surrounded by a town-sized array of steel towers and white buildings. The hi-tech industrial site juts out of the surrounding forest, its sharp silhouette out of place amid the gray crumbling roads, catfish stands and trailer homes of nearby De Kalb, population: 1,164.

The $7.5bn Kemper power plant once drew officials from as far as Saudi Arabia, Japan and Norway to marvel at a 21st-century power project so technologically complex its builder compared it to the moonshot of the 1960s. Its promise? Energy from "clean coal".

“I’m impressed,” said Jukka Uosukainen, the United Nations director for the Climate Technology Centre and Network, after a 2014 tour, citing Kemper as an example of how “maybe using coal in the future is possible”.

Kemper, its managers claimed, would harness dirt-cheap lignite coal – the world’s least efficient and most abundant form of coal – to power homes and businesses in America’s lowest-income state while causing the least climate-changing pollution of any fossil fuel. It was a promise they wouldn’t keep.

Last summer the plant’s owner, Southern Company, America’s second-largest utility company, announced it was abandoning construction after years of blown-out budgets and missed construction deadlines.

“It hit us hard,” said Craig Hitt, executive director of the Kemper County Economic Development Authority. Some 75 miners, roughly half living inside Kemper County, have already been affected in a region where unemployment is 7.1% compared to a national average of just 4.1%.

“It was going to be the biggest project in the history of the county, possibly in the state of Mississippi,” Hitt said. Instead, this year, Kemper County was home to one of the first large coalmining layoffs of the Trump era.

It’s failure is also likely to have a profound impact on the future of “clean coal”. “This was the flagship project that was going to lead the way for a whole new generation of coal power plants,” said Richard Heinberg, senior fellow at the Post Carbon Institute. “If the initial project doesn’t work then who’s going to invest in any more like it?”

Company officials have blamed the failure on factors ranging from competition from tumbling natural gas prices to bad weather, bad timing and plain old bad luck.

But a review by the Guardian of more than 5,000 pages of confidential company documents, internal emails, white papers, and other materials provided anonymously by several former Southern Co insiders, plus on- and off-record interviews with other former Kemper engineers and managers, found evidence that top executives covered up construction problems and fundamental design flaws at the plant and knew, years before they admitted it publicly, that their plans had gone awry.

Their public statements helped to prolong the notion that their “clean coal” power could be affordable, costing Southern’s customers and shareholders billions, giving false hope to miners and firing dreams that American innovation had provided a path forward for “clean coal” technology at a reasonable price.

“It was exciting times, but it turned out to be like a mirage,” said Brett Wingo, a former Southern Co engineer who first went public with his concerns about Kemper’s construction delays in a front-page New York Times investigative report in 2016 and is now suing the company over alleged retaliation. “It was a cool trick – on all of us.”

Kemper’s failure will have a profound impact on international plans to slow climate change which rely heavily on the rapid development of technology to capture carbon and store it, technology that has so far shown little progress.

The United States has spent hundreds of millions in federal taxpayer funds chasing the chimera of clean coal. Donald Trump has been particularly vocal about his support for clean coal. “We have ended the war on American energy and we have ended the war on beautiful, clean coal,” he said in this year’s State of the Union address.

Kemper promised a way forward. But the documents show that while Southern Co management presented a rosy picture of Kemper’s prospects to the public, numerous structural problems with the project had emerged during construction and internal documents questioned the very foundation of the plant’s viability.

In a 24 April 2013 earnings call, for example, Southern’s CEO Tom Fanning regretted Kemper’s newly announced first budget blowout, a $540m hike, but described “tremendous progress” on construction and said “the scheduled in-service date” was achievable.

He said most of Kemper's key components, including its distinctive 174ft-tall white dome where coal would be stockpiled, were already "in place".

What was not mentioned was that two months earlier the company had discovered that the dome’s cement ceiling had started crumbling due to sloppy workmanship, and by early March had opened up a hole in the ceiling the size of a small house. The crumbling was so severe that the entire dome had to be razed and rebuilt months after Fanning touted its completion.

The dome for storing lignite coal stands next to Southern Co’s Kemper County power plant under construction near Meridian, Mississippi, on 25 February 2014.

Several former Southern engineers also explained in interviews that construction workers often lacked the right gaskets, bolts and pipe hangers necessary to connect up Kemper’s over 900,000 linear feet of pipes – but managers ordered them to install the piping anyway.

The orders were “just to show work was being done”, one engineer who worked on Kemper said, requesting anonymity because he still works in the tight-knit utility sector, and describing the inoperable maze of pipes as a “pony show” for the state’s Public Service Commission.

Bosses pressured engineers to turn in impossible cost and schedule estimates, former construction manager Kelli Williams and Wingo both recounted. They faced strong pressure to alter construction planning documents to fit budget goals, they said, even if they had strong reasons to doubt workers could actually achieve what the resulting plans required.

These construction snafus and planning pressures help explain how Kemper ballooned from a $2.4bn construction project to one that cost $7.5bn.

But Kemper’s flaws weren’t limited to construction problems. They went right to the heart of the power plant’s “clean coal” design.

Perhaps the single most important number for a power plant is its availability rate – the percentage of the time it can be up and running versus down for maintenance and repairs. Southern told the federal authorities that achieving 80% availability was a “key performance target” for Kemper, vital to proving that “clean coal” could be affordable.

After construction was under way, Southern Co hired a consultant called World Oil Services to run the numbers again. World Oil concluded Kemper’s design meant it could only be up and running on coal just 30-45% of the time during its first three to five years.

A 2 May 2014 in-house analysis also presents availability rates far lower than the company’s public numbers, concluding clean coal availability would gradually rise from 25% and not reach its “key” target for nearly a decade.

The figures suggest Southern's plans for running an affordable "clean coal" plant were a pipe dream or, at best, unachievable for a decade. By the time Southern called it quits, Kemper had produced electricity from coal alone for only about 100 hours.
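To see why those availability numbers matter, here is a small illustrative comparison of annual coal-fired output at the 80 percent target versus roughly 35 percent availability. The capacity figure below is an assumption made for illustration, not a number taken from Southern's filings.

```python
# Illustrative comparison (assumed plant size, not a Southern Co figure):
# annual coal-fired generation at the 80% availability target versus ~35%.
assumed_net_capacity_mw = 580
hours_per_year = 8760

for availability in (0.80, 0.35):
    mwh = assumed_net_capacity_mw * hours_per_year * availability
    print(f"At {availability:.0%} availability: roughly {mwh / 1e6:.1f} TWh "
          f"of coal-fired output per year")
```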

Williams, who left the company on good terms and now runs her own construction consulting firm, told the Guardian that World Oil’s analysis was provided to Southern as early as 2012. The company buried World Oil’s report, she explained, adding that for years, “trying to find it was like needle-in-a-haystack stuff”.

Southern was encouraged to spend big due to a quirk in the way that the power industry is regulated. Monopoly utilities can make money by spending money, because the law allows them to collect a virtually guaranteed percentage of their construction costs as profit. These incentives were supercharged, critics allege, by a wave of state laws passed in Mississippi and other southern states in the mid-2000s, which allowed utilities to collect reimbursements for construction while projects were still being built, instead of having to wait until construction was finished.
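A hypothetical sketch of the incentive described above: under cost-of-service regulation a utility earns an allowed rate of return on the capital it spends, so a bigger construction bill can mean a bigger annual profit, provided regulators let the costs into the rate base. The return figure below is assumed for illustration, not Mississippi's actual allowed rate.

```python
# Hypothetical numbers only: the basic cost-of-service incentive.
# A regulated utility earns an allowed return on capital placed in its rate base.
allowed_return = 0.10  # assumed allowed rate of return, for illustration

for capex_bn in (2.4, 7.5):
    annual_return_m = capex_bn * 1000 * allowed_return
    print(f"${capex_bn}bn in the rate base -> roughly "
          f"${annual_return_m:,.0f}M per year of allowed return")
```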

“When I first started, I was told you’ll get in as much trouble at this company saving $5m as you’ll get in for spending $5m,” Williams said.

From the start, Mississippi’s Public Service Commission agreed that Kemper could be built if the company could keep construction costs below a “soft cap” of $2.4bn and a “hard cap” of $2.88bn.

The company repeatedly assured regulators and the public that they were “confident” they could build the plant on budget. But internally there were early doubts at the highest levels even before a single shovel was turned.

“I am less than full confident about a $2.8 [bn] cap on Kemper,” Ed Day, CEO of Southern’s Mississippi Power subsidiary, which oversaw the plant, wrote in a May 4, 2010 email to his closest circle. Day listed numerous reasons he doubted his company could build Kemper within the budget demanded by state regulators. “We had gotten comfortable with a cap of $3.2[bn] with a start date of May 1st, but now we don’t have a start date.”

The first hint of the real trouble for Kemper came on 15 December 2011, when meeting notes mentioned the discovery of small "air bubble cracks" in the thick heat-resistant lining inside the power plant's twin hearts, its gasifiers – the units that convert coal into flammable gas in a feat of chemistry requiring temperatures over 1,800F and pressures higher than those at 1,500ft deep in the ocean. To keep that hellish heat and pressure inside, it was vital for the concrete-like insulation lining the steel tank walls to be solid.

Then in February 2012, engineers discovered that the concrete lining inside the gasifier was suffering from "explosive spalling" – laced with tiny pockets of moisture that turned to steam under high heat, causing the concrete shell to pop and crack – and no one could say what had caused the bubbling. "I mean immediately, right there, we knew we had at least a three-to-six-month [schedule] slip when the refractory [lining] failed again," recalled former Kemper engineer Wingo.

As 10 April 2012, the date when the first piece of the gasifier was scheduled to be lifted into place by a massive crane, rapidly approached, engineers still had no clear idea how they would fix the cement insulation.

And if the two gasifiers were late, that would set the entire project behind. Until a gasifier was installed, workers couldn't weld the power plant's steel structure into place around it. And they couldn't lift the blue steel outer shell into place and fix the cracks later.

Cranes stand at the construction site for Southern Co's Kemper County power plant on 25 February 2014.

The clock was ticking – at a construction site that engineers were told cost over a million dollars a day just to keep running. Adding to the pressure: Kemper’s budget relied on collecting $133m in federal tax incentives that would be lost if Kemper wasn’t up and running by May 2014 – and the company already hadn’t left themselves a single day to spare in Kemper’s tight schedule.

Meanwhile the Sierra Club environmental group had won a stunning – if short-lived – victory with a lawsuit aimed at overturning Kemper’s state permit. On 15 March 2012 the state supreme court in Jackson, Mississippi, ruled in favor of the environmentalists on a technicality and revoked the paperwork Kemper needed to go forward with construction.

“They were completely blindsided,” said Williams. “They had no plan for that to happen.”

Making things even worse, on 2 April, managers had finished running their newest construction budget, numbers that state regulators expected to be handed over in about a month, and Kemper was already just $15m shy of its $2.88bn budget cap – not counting the mounting million-dollar-a-day costs of the gasifier delays.

That was a huge problem. “If the Public Service Commission knew costs were running over $2.8bn,” said Charles Grayson, a fiscal conservative whose Bigger Pie Forum has long opposed Kemper on financial grounds, “they would have had to scrap the whole project right there.”

Throughout April, Kemper’s management scrambled to find ways to slash costs, at least on paper. “They gave us goals like, take $50m out, take $200m out,” said Williams. But with even small budget cuts, company files warned of a “severe risk” of missing the 2014 construction deadline.

Some relief for the company finally came on 24 April, when state regulators re-issued Kemper’s paperwork, circumventing the Sierra Club ruling – but they again insisted on a $2.88bn “hard cap” for the budget. And Southern assured the regulators they could stay under that line.

Matters came to a head at a 30 April executive review board meeting, which included Fanning. The documents show the executive review board was presented with a long list of risks associated with whittling Kemper’s official budget down from $2.865bn, and a presentation points out in bold that the impacts of construction delays were left out (which would drive costs up).

Fanning made his call: the next month the company told regulators again that it would stay under budget. But key information was not passed on. On 8 May, general manager Steve Owen sent an email to his staff requesting detailed spreadsheets for his upcoming independent monitor presentation “without any reference to the April 2, 2012 base reforecast”, which suggested the plant would bust its budget.

State regulators smelled something amiss and demanded an audit, which was conducted by Burns and Roe Enterprises, an engineering firm.

The audit’s 15 August 2012 draft report was damning, citing concerns over a subcontractor’s “lack of internal project controls”, concluding that “[c]ompany statuses are optimistic” about design, engineering and construction progress, and describing delays at a Chinese project designed around the same technology as Kemper.

But auditors allowed Southern to extensively comment on that draft, watering down its conclusions and delaying its publication until November 2012 – with construction continuing in the meantime.

In April 2013, the company told regulators there had been a fight inside Southern, pitting those who projected Kemper would cost as much as $2.865bn (crucially just shy of the hard cap), against Ed Day and Tommy Anderson at Mississippi Power Company, who insisted it could be built for $2.49bn.

“Stark differences, albeit with very logical bases, were presented and maintained by both sides and neither appeared willing to budge significantly from their initial positions,” the company wrote, around the time that Ed Day and Tommy Anderson, who ran Southern’s subsidiary Mississippi Power, resigned. “As a result, no set of assumptions and no point estimate were agreed upon.”

It wasn’t until May, Mississippi Power explained, that they had concluded that Kemper would cost somewhere in-between, reaching $2.76bn. In documents later presented to state regulators, Southern offered up a 4 April chart showing a $2.7bn projection with a 95% probability of staying below $2.865bn.

Southern’s subsidiary, Mississippi Power, was forced to admit that it had been wrong to take so long to provide regulators with an explanation – but their internal documents show that back in 2012, when its current state certificate was issued, Southern knew much more than it has ever publicly admitted.

“There was no intentional withholding of information,” Mississippi Power’s new CEO, Ed Holland, told the Associated Press in 2013, as Ed Day’s resignation was made public.

“We made a mistake of not delivering in a timely fashion,” Holland added, which was widely read as admitting the company was late reporting its $366m bump from $2.4bn up to the $2.76bn estimate. By June 2017 the gig was up. Costs had spiraled to $7.5bn and the company had revealed that equipment still required costly repairs.

Southern announced in August that it would recognize an additional $2.8bn loss as it suspended construction at Kemper, meaning that the company’s shareholders have largely borne the brunt of the project’s final costs.

Asked for comment, Southern declined to respond to specific questions. Schuyler Baehman, a Southern Co spokesman, instead pointed to a 6 February settlement over Kemper’s costs and recent tax law changes, which he said would reduce customers’ bills. “We look forward to the continued operation of Kemper’s efficient natural gas facility,” Baehman wrote, “which has been reliably serving Mississippi Power customers for more than three years.”

The abandonment of clean coal at Kemper this summer may have long-running ramifications for the international response to climate change. It was slated to be the world’s largest coal carbon capture plant, touted as potentially the first of many similar projects around the world, and the only built-from-the-ground-up coal plant with carbon capture included right from the drawing board.

If Kemper ran aground due to fundamental design flaws rather than poor market timing or mishaps that could be avoided next time, then the prospects for the affordable clean coal and carbon capture that many countries are counting on to meet international climate agreements may be even more distant than hoped.

Instead of delivering on the promise of clean coal, Kemper has shattered it, in the meantime leaving locals out of work and shareholders out of pocket.

“We told them years ago that this was a facade,” said Louie Miller, director of the Mississippi chapter of the Sierra Club. “I think there’s going to be a big scrap metal sale at some point.”

Why Is California Rebuilding in Fire Country? Because You’re Paying for It

After last year’s calamity, officials are making the same decisions that put homeowners at risk in the first place.

By Christopher Flavelle

March 1, 2018

At the rugged eastern edge of Sonoma County, where new homes have been creeping into the wilderness for decades, Derek Webb barely managed to save his ranch-style resort from the raging fire that swept through the area last October. He spent all night fighting the flames, using shovels and rakes to push the fire back from his property. He was even ready to dive into his pool and breathe through a garden hose if he had to. His neighbors weren’t so daring—or lucky.

On a recent Sunday, Webb wandered through the burnt remains of the ranch next to his. He’s trying to buy the land to build another resort. This doesn’t mean he thinks the area won’t burn again. In fact, he’s sure it will. But he doubts that will deter anyone from rebuilding, least of all him. “Everybody knows that people want to live here,” he says. “Five years from now, you probably won’t even know there was a fire.”

As climate change creates warmer, drier conditions, which increase the risk of fire, California has a chance to rethink how it deals with the problem. Instead, after the state’s worst fire season on record, policymakers appear set to make the same decisions that put homeowners at risk in the first place. Driven by the demands of displaced residents, a housing shortage, and a thriving economy, local officials are issuing permits to rebuild without updating building codes. They’re even exempting residents from zoning rules so they can build bigger homes.

State officials have proposed shielding people in fire-prone areas from increased insurance premiums—potentially at the expense of homeowners elsewhere in California—in an effort to encourage them to remain in areas certain to burn again. The California Department of Forestry and Fire Protection (Cal Fire) spent a record $700 million on fire suppression from July to January, yet last year Governor Jerry Brown suspended the fee that people in fire-prone areas once paid to help offset those costs.

Critics warn that those decisions, however well-intentioned, create perverse incentives that favor the short-term interests of homeowners at the edge of the wilderness—leaving them vulnerable to the next fire while pushing the full cost of risky building decisions onto state and federal taxpayers, firefighters, and insurance companies. “The moral hazard being created is absolutely enormous,” says Ian Adams, a policy analyst at the R Street Institute, which advocates using market signals to address climate risk. “If you want to rebuild in an area where there’s a good chance your home is going to burn down again, go for it. But I don’t want to be subsidizing you.”

The building boom has vastly increased the potential damages from fire. In 1964 the Hanly Fire in Sonoma County destroyed fewer than 100 homes. Last fall the Tubbs Fire, which covered almost the same ground, destroyed more than 5,000 homes and killed 22 people. And though it was the most destructive, the Tubbs Fire was just one of 131 across California in October. By the end of the year, more than 1 million acres had been burned and 10,000 buildings destroyed.

The next fire may be worse. Sonoma has begun issuing building permits for houses to go back up. Rather than strengthening building codes, the county has weakened rules—passing temporary ordinances that let people expand their homes beyond their previous size and waiving development fees for new units. Other areas hit by fires in 2017, including Anaheim and Ventura, have similarly exempted homeowners from some zoning rules.

Susan Gorin, one of Sonoma’s five elected county supervisors, represents some of the areas hardest hit by the fires. While acknowledging the dangers, Gorin, who lost her own home, says the county has made clear that residents will be allowed to rebuild. “One could make the argument that people were not meant to live in those environments,” she says. But she doesn’t feel it’s her place to make that call. “From my perspective, it is very difficult for governments or anyone to tell another person, another property owner, that they could not, should not, rebuild.”

One of the issues, in Gorin’s view, is fairness: Why should the people whose homes burned be penalized and not those whose homes were spared? “It’s pretty hard to say what is defensible and what is not defensible,” she says. “Are we telling the fire victims that they can’t rebuild, but we’re going to let the other folks remain in place?”

Ray Rasker has been thinking about this issue for years. As a consultant in Montana, he advises governments on how to reduce the damage from wildfires while allowing for development. He says officials’ motive for allowing rebuilding has less to do with fairness than it does with money. “The economics of the West have changed,” he says. Twenty years ago, local budgets came largely from resource extraction, including lumber, oil and gas, and mining. “The new cash cow is homebuilding,” Rasker says. “When they look at a new subdivision, they’re thinking tax revenues. And that clouds their judgment.”

Rasker says part of the problem is perverse incentives: Local officials know that most of the cost of fire suppression and disaster recovery will be paid by state and federal taxpayers. But he warns that putting homes at the forest’s edge costs communities more than they realize. “The suppression cost is about 10 percent of the total,” he says. The rest is harder to see: reduced home values, lost tax revenue, higher insurance costs, closed businesses, even higher medical bills. He estimates half the cost of a wildfire is ultimately borne by the community affected.

Nonetheless, local officials almost always decide that rebuilding makes sense despite the risk that the houses will burn again. Anu Kramer, a researcher at the University of Wisconsin at Madison, looked at what happened to 3,000 buildings destroyed by wildfires in California from 1970 to 1999. She found that 94 percent were rebuilt. The result is that fires consistently—and predictably—destroy homes in the same place.

“I’ve heard firefighters tell me that they’ve been on the line, and they’d have this eerie sense of déjà vu—they’d think, I’ve been here before,” says Christopher Dicus, a professor of wildland fires and fuels management at California Polytechnic State University. “The key is to avoid building in places that are known fire hazards.”

Nobody illustrates the tension in California’s wildfire policy better than Dave Jones. As a state legislator a decade ago, he introduced a bill requiring local governments to account for fire protection. It was vetoed by then-Governor Arnold Schwarzenegger, who said Cal Fire “should not be in the position of having to act as a local land agency.” As a result, the state fire agency is “now fighting to defend subdivisions in high-risk fire areas that they never had to defend before,” Jones says. “The state has to pick up the cost of that. There’s an externalization of the cost of those decisions.”

Jones has since become California’s insurance commissioner. In January he asked lawmakers for more authority to stop insurers from raising premiums or dropping customers in response to the growing risk. Jones says his position hasn’t changed and that he still supports using local planning and regulation to limit development in fire-prone areas. He just thinks higher premiums are, by contrast, a “crude tool.” The state’s insurance industry, which faces a record $12 billion in claims after last year’s wildfires, warns that tying insurers’ hands will only push costs onto homeowners elsewhere in California. “Of course, people who choose to live in the forest and local governments that continue to approve development in the forest would like less fire-prone areas to subsidize them,” says Rex Frazier, president of the Personal Insurance Federation of California, which represents the industry. “But that just solves one problem by creating another.”

Even attempts at small fixes have failed. In 2011 the legislature levied a fee on people who live in “state responsibility areas”—zones outside cities where the state, rather than local officials, carries the burden of dealing with wildfires. But the fee, which topped out at about $150, proved deeply unpopular among the 800,000 or so households that paid it. Last July the governor scrapped it, replacing the revenue with state funds. Brown’s office declined to comment.

Tea cups arranged on the steps of a burned home on Thomas Lake Harris Drive in Santa Rosa, with the Fountaingrove Club’s golf course in the background. Photographer: Lucas Foglia for Bloomberg Businessweek

California is just the most obvious example of a national dilemma. In 2013, after 19 firefighters died fighting a wildfire in Arizona, officials in Montana decided it was no longer fair for homeowners to put firefighters at risk by deciding to live at the forest’s edge. The Lewis and Clark Rural Fire Council issued a resolution declaring it wouldn’t go out of its way to save homes in the “Wildland/Urban Interface.” The resolution was written by Sonny Stiger, a retired U.S. Forest Service employee. “There is only one way to say this: No, we are not going to risk our lives to save your house,” he said when the resolution was released. The response from homeowners was swift and furious: Hate email began pouring in almost immediately.

Still, Stiger says, using regulations to keep people from building in dangerous areas is even harder. “When you say ‘zoning regulations,’ people almost come unscrewed.” The only way to produce real change is for the federal government to get stricter about the places it will save from fires, he adds: “The Forest Service is going to have to say, ‘From this date on, we’re not protecting those homes.’ ”

In 2016, Rasker, the Montana consultant, took that message to the Obama administration. Invited to the White House for a meeting about wildfires, he argued that local officials were issuing building permits near national forests and parks knowing that federal forest firefighters were likely to protect them—and pick up the cost. “There’s this disconnect between local land-use authority and what happens when things go wrong,” Rasker recalls saying. He urged officials to seek ways to increase federal grants and other payments to communities that made smart land-use decisions and cut funds for those that didn’t. The White House staff said they agreed with him, Rasker says. Nothing changed.

All the houses on Crescent Circle in Santa Rosa burned. Some on the next block were untouched. Photographer: Lucas Foglia for Bloomberg Businessweek

Tom Harbour was at that meeting as director of the U.S. Forest Service under Obama. He says until local communities get clearer financial signals about the risk of building in vulnerable areas, there’s not much the federal government can do about it. “We’ve designed the system to incentivize and compartmentalize those kinds of behaviors,” he says. “We look only to tomorrow, instead of 10 or 50 years out.”

Back in Sonoma, Kat Geneste, a retired police officer, wanders through the ruins of the house she bought four years ago. October’s fire was the third she’s lived through, she says, and by far the worst. She recalls flames surrounding her car, hearing propane tanks exploding every few seconds, and even seeing animals on fire. At one point, she was certain she’d die. “I kind of kissed my ass goodbye.”

Still, Geneste says she hopes to rebuild her home, then sell it later, once prices recover. Like Webb, she’s not worried that the fires will scare off buyers. If anything, she expects the houses that go up will be even bigger and better than the ones they replace. After what she went through, does she have any advice for people considering whether to live in the area? Yes: “It’s a good time to buy.”

An indoor chemical cocktail

Sasho Gligorovski , Jonathan P. D. Abbatt

Science 09 Feb 2018:

Vol. 359, Issue 6376, pp. 632-633

DOI: 10.1126/science.aar6837

Reactive chemicals in an indoor environment arise from cooking, cleaning, humans, sunlight, and outdoor pollution.

In the past 50 years, many of the contaminants and chemical transformations that occur in outdoor waters, soils, and air have been elucidated. However, the chemistry of the indoor environment in which we live most of the time—up to 90% in some societies—is not nearly as well studied. Recent work has highlighted the wealth of chemical transformations that occur indoors. This chemistry is associated with 3 of the top 10 risk factors for negative health outcomes globally: household air pollution from solid fuels, tobacco smoking, and ambient particulate matter pollution. Assessments of human exposure to indoor pollutants must take these reactive processes into consideration.

A few studies illustrate the nature of multiphase chemistry in the indoor environment. As Sleiman et al. have shown, a highly carcinogenic class of compounds—the tobacco-specific nitrosamines—forms via the reaction of gas-phase nitrous acid (HONO) with cigarette nicotine that is adsorbed onto indoor surfaces similar to those in a typical smoker's room. HONO is also produced indoors directly by other combustion sources such as gas stoves and by the gas-surface reactions of gaseous nitrogen oxides on walls, ceilings, and carpets. Likewise, carcinogenic polycyclic aromatic hydrocarbons (PAHs) and their often more toxic oxidation products are mobile, existing both on the walls of most dwellings and in the air; PAHs arise from combustion sources such as smoking and inefficient cookstoves. This is a particularly important issue in developing countries, where the adverse health effects of cooking with solid fuels are a leading cause of disease. As another example, use of chlorine bleach to wash indoor surfaces promotes oxidizing conditions not just on the surfaces being washed but throughout the indoor space. Reactive chlorinated gases (such as HOCl and Cl2) evaporate from the washed surface, can oxidize other surfaces in a room, and may be broken apart by ultraviolet (UV) light to form reactive radicals.

It is not only human activities such as cooking, smoking, and cleaning that affect the indoor environment. The mere presence of humans affects the oxidative ability of the air. Wisthaler and Weschler have shown that human occupancy can dramatically affect ozone levels, such that concentrations of this oxidant dropped by half within 30 minutes when two people entered a test chamber similar in size to a typical indoor room. At the same time, the concentrations of various carbonyl compounds increased. The reactions are so fast that many chemically reactive oils on human skin may be transformed into more oxidized molecules on time scales of tens of minutes. Recent measurements in heavily occupied spaces such as classrooms illustrate the wealth of human emissions of not only naturally formed molecules (such as small organic acids) but also personal care products (such as siloxanes). A key uncertainty is the degree to which indoor environments are oxidizing. Outdoor light levels drive the production of radicals, such as hydroxyl (OH), which act as atmospheric cleansing agents. Without high levels of UV light indoors, what levels of OH will be present? Gómez Alvarez et al. have reported the detection of indoor OH radicals, formed from the sunlight-driven decomposition of nitrous acid; they observed OH concentrations similar to those that form outdoors.

These findings have placed emphasis on indoor radical chemistry. It remains unclear, however, whether light is necessarily involved, or whether dark sources of OH from the oxidation of gas-phase alkenes by ozone dominate instead. In particular, terpene oils, which are components of cleaning products, fragrances, and cooking materials, are widely present indoors. Ozonolysis of such alkenes leads to the formation of highly reactive molecules, the Criegee intermediates. The latter sometimes decompose to form OH radicals but in other circumstances may react with a wide range of other indoor constituents. In 2017, Berndt et al. reported the direct mass spectrometric measurement of gas-phase Criegee intermediates; this work opened up the potential for measuring such species in indoor environments. Criegee intermediates may also form when indoor surfaces that are coated with chemically unsaturated skin cells and cooking oils are exposed to ozone. Although largely unstudied, such chemistry may form highly reactive and potentially harmful products, including peroxides and ozonides, on indoor surfaces.

The atmospheric chemistry field has undergone a dramatic transformation in its understanding of how volatile organic compounds (VOCs) are oxidized. Highly oxidized organic compounds arise via autooxidation mechanisms initiated by either ozone or radical attack. Reaction with a single oxidant molecule can form multiple oxygenated functional groups on an organic reactant within seconds, changing it from a volatile gas to a molecule that will condense to form secondary organic aerosol (SOA) particles. Given that levels of VOCs, such as terpenes, can be much higher indoors than outdoors, this pathway may be an important indoor aerosol formation mechanism. Because outdoor pollution levels are very high in industrially developing regions such as China, indoor VOC concentrations there may be much higher than in European and North American homes.

The building science research community has long identified the importance of ventilation for the state of indoor environments. Open windows expose us to outdoor air, whereas well-sealed houses are subject to emissions from furnishings, building materials, chemical reactions, and people and their activities. Climate change and outdoor air pollution are leading to efforts to better seal off indoor spaces, slowing down exchange of outdoor air. The purpose may be to improve air conditioning, build more energy-efficient homes, or prevent the inward migration of outdoor air pollution. As exposure to indoor environments increases, we need to know more about the chemical transformations in our living and working spaces, and the associated impacts on human health.

Dutch plan to build giant offshore solar power farm

by Reuters

Wednesday, 14 February 2018 16:16 GMT

"There is more sun at sea and there is the added benefit of a cooling system for the panels, which boosts output by up to 15 percent"

By Anthony Deutsch

AMSTERDAM, Feb 14 (Reuters) - An offshore seaweed farm in the North Sea will be turned into a large solar power farm that aims to pipe energy to the Dutch mainland in roughly three years.

The project comes at a critical time for the Netherlands, which is struggling to curb fossil fuel use and meet greenhouse gas emission targets after years of underinvestment in renewable energy sources.

After an initial pilot next year, a consortium comprising energy producers, scientists and researchers plans to ultimately operate 2,500 square metres of floating solar panels by 2021, said Allard van Hoeken, founder of Oceans of Energy, which devised the project.

The pilot, which will have 1.2 million euros ($1.48 million) in government funding, will operate 30 square meters of panels from this summer. It will test equipment, weather conditions, environmental impact and energy output.

Utrecht University will examine energy production at the offshore prototype, located around 15 kilometres (nine miles) off the coast of the Dutch city of The Hague at a testing zone known as the North Sea Farm.

"In addition to removing the problem of a land shortage, there are several other benefits to building at sea, similar to those in wind energy," said solar energy expert Wilfried van Sark at Utrecht University, who is involved in the project.

"There is more sun at sea and there is the added benefit of a cooling system for the panels, which boosts output by up to 15 percent," he said.

If successful, there is plenty of space to expand the farm, unlike on the overcrowded Dutch mainland where there has been public opposition to wind turbines.

The panels will be more rugged than ordinary onshore models to account for the harsher weather conditions and tidal shifts at sea, Van Sark said.

The panels will be moored between existing wind turbines and connected to the same cables, transporting energy efficiently to end users.

Van Hoeken said he expects offshore solar energy to eventually be cheaper than offshore wind and mainland power sources, due mainly to a lack of land costs.

($1 = 0.8114 euros) (Editing by Kirsten Donovan)

Gone with the wind: storms deepen Florida's beach sand crunch

Laila Kearney

FLAGLER COUNTY, Fla. (Reuters) - Down the palm tree-lined roads of northeast Florida’s Flagler County, a half-dozen dump trucks are shuttling back and forth along the Atlantic coast pouring thousands of tons of sand onto the local beach.

Replacing sand swept away by waves and wind is critical work to protect seaside homes and businesses as well as the tourism dollars brought by northerners seeking refuge from the cold in the Sunshine State.

Getting enough of it, for the right price and in time for the peak tourist season, has become much harder after a violent storm year that brought Irma, the most powerful hurricane to hit the state in over a decade, and saddled Florida with more than $50 billion in damage.

Costs of so-called beach renourishments are a fraction of the total, measured in hundreds of millions of dollars, but the effort is crucial for Florida’s $67 billion tourism industry. And while sand needs are surging, there is not enough to go around.

“It’s like the slow progression of tooth decay versus a fight where someone knocks out your teeth all at once,” Flagler County Administrator Craig Coffey said, referring to sand lost during Irma and Hurricane Matthew, which buffeted Florida’s coast in October 2016.

With the longest coastline of any mainland U.S. state, more money and time is spent fixing up Florida’s shores - widening and building dunes - than in any other state.

But after seven decades of rebuilding its beaches, the state is now struggling with sand shortages, rising costs and tight public funds even during calmer years. The quick succession of powerful storms makes the challenges even more daunting.

By one estimate, based on a sample of beaches, Irma knocked out four times the amount of sand Matthew displaced, U.S. Army Corps of Engineers spokesman John Campbell said. Matthew was already considered one of the worst storms in recent memory.

As weather patterns change and coastal development increases, more states have rolled out programs to counter beach erosion over the past five years.

Other nations, including Mexico, Britain and Australia, also regularly fix up their shores. High demand for sand in the construction industry further strains global supply.

As needs and costs rise in Florida, communities are increasingly competing both for sand and funding, with some retaining “sand lobbyists” to represent them in state and federal legislatures.

Flagler County tried for more than a decade to get the federal sand funds used for popular beaches like Miami before turning to local tax dollars, private money and emergency aid to rebuild dunes and protect neighborhoods flooded in Matthew, Irma and several nor‘easters since. The estimated $26 million project began late last month.

That back-to-back strike of storms has pushed counties to reach for sand sources all at once, driving up prices.

South of Flagler, Brevard County wanted to expand a contract it awarded after Matthew to also cover post-Irma needs at the original price, but the contractor rejected the deal.

New bids came in 11 percent to 39 percent higher and the county settled for the lowest offer, said County Commissioner John Tobia, who wants some of the local tax money spent on sand to be used repairing the county’s damaged roads instead.

Federally protected sea turtles nest along Florida’s east coast, and the law prohibits any beach work during the nesting period from May through October.

Environmental rules also prescribe what type of sand can be used, since its color affects the temperature - the darker the sand, the faster it warms - and that in turn can determine the sex of turtle hatchlings.

As usable offshore sand sources get depleted and tapping into new sites involves lengthy permitting, more local governments are trucking sand from mines - instead of dredging it from the seabed and piping it onshore - even though it can cost five times more per cubic yard.

“With the shrinking sand supply, it leads to conflict,” said Dave Bullock, who retired last month as town manager for Florida’s western barrier island of Longboat Key, which used up the rest of its offshore reserves after Matthew.

In a recent example of that clash, two neighboring beach communities, Siesta Key and Lido Key, are facing off in a lawsuit over which can claim 1.8 million tons of sand from a common boating channel.

Environmental advocates argue that beach erosion is primarily a natural phenomenon and that efforts to reverse it create a vicious circle by encouraging building along the shore.

That in turn puts more people and public resources at risk and calls for greater efforts and money to protect them.

The long-term, lasting solution would be to roll back coastal development, environmental activists argue.

Still, needs are likely to grow, says Derek Brockbank, executive director at the American Shore and Beach Preservation Association, which lobbies for coastal governments and businesses.

Climate change and coastal development have created an urgent need to protect the upland, Brockbank said, calling for $5 billion to be set aside over the next decade in any upcoming federal infrastructure bill.

Biochar could replace unsustainable peat moss in greenhouse industry

by Staff Writers

Urbana, IL (SPX) Feb 15, 2018

Plant lovers are familiar with peat moss as the major component of potting mix, but harvest of the material is becoming unsustainable. Not only is peat being removed faster than it can re-form, its use in potting mix contributes to the release of carbon dioxide into the atmosphere.

"Peat bogs naturally store carbon. When peat moss is harvested, there's a transfer of a global carbon sink into a net source. That's because within a couple growing seasons, most of the peat moss from the potting mix is either mineralized by microbes or thrown out and decomposed. Either way, carbon dioxide is released," says Andrew Margenot, assistant professor in the Department of Crop Sciences.

In a recent study, Margenot and colleagues from the University of California, Davis investigated a material called biochar as an alternative to peat moss in potting mix. Similar to charcoal, biochar is produced through a process called pyrolysis, or heating to high temperatures in the absence of oxygen. And like charcoal, it can be derived from virtually any organic substance.

"In our study, we used one made from softwoods from selective logging. But biochars can be made from corn stover, switchgrass, and lots of other organic waste products," Margenot says.

"Biochar could even be made from a greenhouse operation's own waste, if there are trimmings from plants or old peat moss." Margenot emphasizes that 'biochar' refers to a very broad class of material that can vary greatly in its properties depending on the pyrolysis temperature and the feedstock used.

When organic material decomposes naturally, the process releases carbon dioxide. But biochar decomposes very slowly - potentially on the order of centuries - so when organic material is turned into biochar, the carbon is essentially sequestered and can't escape back into the atmosphere.

But how well does it work in potting mix? To find out, Margenot and his team grew marigolds from seed to flower in a number of experimental potting mixtures that replaced peat moss with an increasing proportion of commercially available softwood biochar.

In the biochar mixtures, pH soared.

"The ones with lots of biochar had a pH up to 10.9, which is ridiculous for trying to grow things," Margenot says. But this wasn't unexpected for the type of biochar the researchers used.

Marigolds grew and flowered just fine, even when biochar replaced all of the peat moss in the potting mix. However, for plants growing with high concentrations of biochar, the early stages were a struggle.

"You could see that the plants took a hit in the early stages of growth - the first two to three weeks. They were shorter and had less chlorophyll, indicative of a nitrogen deficiency, which you'd expect at such a high pH. But these plants caught up by the end. By flowering stage, there was no negative effect of biochar versus peat moss," Margenot says.

Not only did the plants suffer no long-lasting negative effects of the biochar, the pH in those pots neutralized by the end of the study. Margenot thinks this could have been due to a natural process of ion exchange between plant roots and potting mix, naturally occurring carbonates in the irrigation water, or the use of industry-standard fertigation - irrigation with low levels of dissolved nutrient ions such as nitrate and phosphate - in the experiment.

Although he only tested one type of biochar in one type of plant, Margenot is optimistic about the promise of biochar in nursery applications.

"Because we used a softwood biochar known for its high pH, we really tested the worst case scenario. If it could work in this case, it could probably work with others."

The article, "Substitution of peat moss with softwood biochar for soil-free marigold growth," is published in Industrial Crops and Products. Margenot's co-authors, all from UC Davis, include Deirdre Griffin, Barbara Alves, Devin Rippner, Chongyang Li, and Sanjai Parikh.

Biotechnologists look to bacteria in extremely cold environments for 'green' detergents

by Staff Writers

London, UK (SPX) Feb 15, 2018

Despite subzero temperatures, increased UV radiation, little liquid water, and few available nutrients, bacteria living at Earth's poles thrive. They manage it thanks in part to molecules called biosurfactants, which help them separate the complex substrates they feed on into easy-to-metabolize droplets. On February 7 in the journal Trends in Biotechnology, researchers review the hypothetical uses of these cold-loving molecules for "green" detergents, fuel additives, and other applications.

"They really have a tremendous potential," says microbiologist and biotechnologist Amedea Perfumo of the GFZ German Research Centre for Geosciences. Biosurfactants are safe to release into the environment and can be produced using affordable waste products such as olive oil byproduct and cooking oils. They also work in lower concentrations, so we need less of them to get the same job done. But the ones produced by extremophilic bacteria have what Perfumo calls "an extra feature": they work at freezing temperatures.

This stability has huge implications for how these molecules could be used. Biodiesel, which is produced from waste materials and cleaner burning than gasoline, might be a viable fuel alternative if a biosurfactant additive could improve its sluggish flow in cold temperatures.

Cold-active biosurfactant detergents would mean we could go green by reducing washing temperatures, without worrying that our clothes wouldn't get clean. These biosurfactants could also be used to harvest natural gas from cage-like ice crystals called gas hydrates or to clean up pollution spills in colder regions of the ocean.

According to Perfumo, there has never been a better time than now to advance research into these biotechnological applications.

"The cold regions of our planet are actually becoming more reachable for exploration and for scientific research," she says. And the growing availability of extremophilic bacteria in culture collections has also improved accessibility.

"Scientists who don't have the option to go personally to the polar regions and take samples can simply get organisms from culture collections. It's in reach for everybody."

Cold-active enzymes, which extremophilic bacteria also produce, have already entered industrial production. When asked why this isn't true of cold-active biosurfactants, Perfumo doesn't have a good answer.

"It's quite a question mark for me," she says, because she sees so much potential for these molecules. She does acknowledge, however, that there is still a lot of work that needs to be done to determine the most useful bacteria, the conditions at which they will produce the highest yields, and whether it might be possible to produce biosurfactants as part of the process that produces enzymes.

"We still only know a little," she says. Nonetheless, she's hopeful.

"I think that with a little work and a little patience and especially with joint forces, we can take a bold step in the near future. It will really be a grand challenge for science and technology."

Research Report: "Going Green and Cold: Biosurfactants from Low Temperature Environments to Biotechnology Applications"

Argonne and Energy Vision demonstrate Renewable Natural Gas as transport fuel

New York, NY (SPX) Feb 16, 2018

The US Department of Energy's Argonne National Laboratory and the sustainable energy NGO Energy Vision have released two case studies assessing the results of pioneering projects that were among the first to produce Renewable Compressed Natural Gas (R-CNG) vehicle fuel, by using anaerobic digesters to capture the biogases from decomposing organic waste.

Energy Vision and Argonne produced the studies jointly. One study looks at Fair Oaks Farms, a large dairy cooperative in Indiana with roughly 36,000 cows. It converts manure to R-CNG using a large anaerobic digester, and uses the fuel to power its milk tanker trucks.

The other study assesses the Sacramento BioDigester, the first food-waste digester in California to turn commercial organic waste into R-CNG vehicle fuel using anaerobic digestion.

"These projects are trail blazers, and their experience bodes well for the future of renewable natural gas," said Matt Tomich, president of Energy Vision and co-author of the case studies.

"Their success can serve as models for other places with large organic waste streams, which is virtually every urban and rural setting in the country."

Nationwide, renewable natural gas has grown over 70% annually in recent years - facilitated by inclusion in the EPA's Renewable Fuel Standard (RFS2), which sets a minimum volume for the amount of renewable fuel that must be used in the transportation sector. Renewable natural gas production for transportation totaled 151 million gasoline gallon equivalents in 2017, up from 125M GGEs in 2016 and 90M GGEs in 2015.

R-CNG derived from organic waste is chemically similar to geologic compressed natural gas (CNG), and can be used in the same applications - heating or cooling buildings, generating electricity, or fueling vehicles.

But unlike fossil CNG, it's a fully renewable fuel. According to Argonne National Laboratory's GREET model, R-CNG produced from anaerobic digestion of food waste is net carbon-negative over its lifecycle, including production, use and avoided emissions. That means making and using it actually results in lower atmospheric GHG than if the fuel were never made or used. R-CNG derived from a food waste digester meets or exceeds international goals of reducing GHG emissions 80% from 2005 levels by 2050.
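The net-negative claim rests on lifecycle accounting: the emissions avoided by capturing methane from decomposing waste are credited against the emissions from producing and burning the fuel. A minimal sketch of that bookkeeping, with purely hypothetical numbers rather than GREET outputs, looks like this:

# Sketch of lifecycle GHG bookkeeping for a waste-derived fuel.
# All numeric values are hypothetical placeholders, not GREET results.
def net_lifecycle_ghg(production, combustion, avoided):
    """Net g CO2e per MJ of fuel: emissions created minus emissions avoided."""
    return production + combustion - avoided

# If the methane captured from decomposing food waste would otherwise have
# escaped to the atmosphere, the avoided term can exceed everything else,
# driving the lifecycle total below zero.
print(net_lifecycle_ghg(production=20.0, combustion=56.0, avoided=110.0))  # -34.0 g CO2e/MJ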

"R-CNG can achieve the greatest GHG reductions of any transportation fuel today - 70% or more as compared to gasoline or diesel," said Marianne Mintz of Argonne National Laboratory's Energy System Division, who co-authored the case studies.

R-CNG also saves on fuel costs and allows truck and bus fleets to operate more quietly and efficiently, generating fewer pollutants that threaten public health. Compared to diesel, it reduces carbon monoxide up to 70%, nitrous oxide up to 87%, and particulate matter up to 90%, as well as reducing noise up to 90%.

Fair Oaks Farm's digester generates enough R-CNG to displace some 1.5 million gallons of diesel, and to cut annual GHG emissions by 19,500 tons CO2e. That's a 43% reduction in carbon emissions per gallon of milk, a selling point that helped the company negotiate an exclusive supply agreement with the national grocery chain Kroger.
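As a plausibility check on those Fair Oaks figures (not a calculation from the case study itself), the implied saving per displaced gallon works out to roughly the combustion-plus-upstream footprint usually attributed to diesel:

# Implied carbon intensity from the numbers quoted above.
tons_co2e_avoided = 19_500        # annual GHG reduction cited for the digester
gallons_displaced = 1_500_000     # diesel gallons displaced per year
kg_per_gallon = tons_co2e_avoided * 1_000 / gallons_displaced
print(f"{kg_per_gallon:.1f} kg CO2e avoided per displaced gallon")  # 13.0
# A diesel gallon emits roughly 10 kg CO2 at the tailpipe, so ~13 kg including
# upstream emissions is a sensible range (treating tons as metric here).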

The Sacramento BioDigester was built by a public-private partnership in 2013. Atlas Disposal and other haulers collect the organic wastes from area businesses and deliver it to the digester, which produces enough R-CNG to displace 500,000 gallons of diesel a year and divert up to 40,000 tons of organic waste from landfills. Atlas's subsidiary ReFuel Energy Partners uses that R-CNG to power its 30 natural gas powered refuse trucks. Atlas is committed to converting its entire refuse fleet to natural gas as older diesel vehicles retire.

White House report warns of fallout from grid cyberattack

Blake Sobczak, E&E News reporter

Energywire: Tuesday, February 20, 2018

A cyberattack on the power grid could erode trust in key U.S. institutions and cause billions of dollars in damage, a top White House advisory group said Friday.

The Council of Economic Advisers said that "insufficient" investment in security across critical infrastructure sectors exacerbates the risk posed by hackers. But the council's report concluded that private firms are still in the best position to ward off a devastating cyberattack.

"An attack launched against the electric grid could affect large swaths of the U.S. economy because most economic activity is dependent on access to electricity," the report said, citing a 2015 study by the Lloyd's insurance market operator and the University of Cambridge's Centre for Risk Studies that found a worst-case cyber event could cost the U.S. economy $1 trillion (Energywire, July 9, 2015).

Actual cybersecurity losses have fallen well short of that mark. The White House advisory group estimated that "malicious cyber activity" cost between $57 billion and $109 billion in 2016. The report also notes, however, that those figures do not reflect the potential of "a devastating cyberattack that would ripple through the entire economy."

The chances of such a nightmarish scenario are remote, said independent grid security consultant Tom Alrich. Large power utilities face binding physical and cybersecurity standards set through the Federal Energy Regulatory Commission and the North American Electric Reliability Corp., and smaller energy firms must report to increasingly cyber-aware state regulators.

"You're talking about a very small probability of a catastrophic event, so you have to protect against it," Alrich noted. "There will definitely be a big grid event at some point, and you've got to think about how you're going to recover from it."

Alrich said setting up self-sustaining microgrids could make it much harder for attackers, or storms or solar weather, to bring down large swaths of the grid. But ultimately President Trump and administration officials will have to chart their own course for dealing with dire grid scenarios, he said.

"If they focus too much on cybersecurity, then the implication is that we ought to throw everything we can at cybersecurity because terrible things could happen," he said. "You've got to think about [grid] resilience itself, not just the titular causes."

The Trump administration has shown a keen interest in locking down the power grid against hackers. Trump's 2019 budget request set aside millions of dollars in new funds for grid network defenses, including $96 million for a stand-alone cybersecurity office at the Department of Energy (Energywire, Feb. 15).

In May 2017, Trump issued an executive order on cybersecurity that called for a checkup on the power grid's resilience to an attack. That assessment, carried out by DOE and the Department of Homeland Security, was never made public, though the Council of Economic Advisers' report cites the study in a footnote while discussing grid cyber risks to U.S. military operations.

"It is estimated that a loss of power would impact the [Department of Defense] missions of preventing terrorism and enhancing security, safeguarding and securing cyberspace, and strengthening national preparedness," the council said, referencing the "Assessment of Electricity Disruption Incident Response Capabilities." "If power outages affected missions both at home and abroad, United States security would be significantly impacted."

Trump ice cubes used to fight global warming

A humorous article regardless of your politics.

https://www.stuff.co.nz/life-style/food-wine/food-news/101664851/kiwi-creatives-create-donald-trump-ice-cubes-to-help-fight-global-warming

Energy storage leap could extend range and slash electric car charging times

Adam Vaughan Energy correspondent

Mon 26 Feb 2018 12.32 EST

It takes about eight hours to charge the lithium-ion batteries in electric vehicles. Photograph: Christopher Thomond for the Guardian

Researchers have claimed a breakthrough in energy storage technology that could enable electric cars to be driven as far as petrol and diesel vehicles, and recharge in minutes rather than hours.

Teams from Bristol University and Surrey University developed a next-generation material for supercapacitors, which store electric charge and can be replenished faster than normal batteries.

This could allow cars to recharge in 10 minutes, rather than the eight hours it can take to replenish the lithium-ion batteries in current electric vehicles.

The technology has sufficient energy density to comfortably surpass the 200 to 350-mile ranges of leading battery-powered cars such as Teslas, according to its backers.

Dr Donald Highgate, the director of research at Superdielectrics, a company that worked with the universities on the research, said: “It could have a seismic effect on energy, but it’s not a done deal.”

Supercapacitors have existed for decades and can store and release power rapidly. Tesla’s Elon Musk has said a breakthrough in transportation is more likely to come from supercaps than batteries.

Superdielectrics was originally developing a polymer that could be transparent and hold electronic circuits for potential use in Google Glass-style applications.

But after realising the energy storage capabilities of the material, it changed tack in 2014 and has produced 10cm² demonstrations that can power a tiny fan or LED bulb.

There are drawbacks to the technology, however. If you left a supercap car for a month at an airport car park, it would have lost much of its charge by the time you returned, the researchers admitted. For this reason, they expect the first such cars to also have a small conventional battery.

The Bristol-Surrey teams believe the polymer they are using could be more energy-dense than lithium ion, holding 180 watt-hours per kilogram (Wh/kg) compared with 100-120 Wh/kg for commercial lithium-ion cells.
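To make the density comparison concrete, here is a small sketch of what those figures imply for the mass of a vehicle-sized pack. The 60 kWh pack size is an assumption chosen for illustration; the energy densities are the ones quoted above.

# Pack mass implied by the quoted energy densities. The 60 kWh pack size is
# an illustrative assumption, not a figure from the article.
def pack_mass_kg(pack_kwh, wh_per_kg):
    """Mass in kg of a pack storing pack_kwh at the given gravimetric density."""
    return pack_kwh * 1000 / wh_per_kg

for label, density in [("claimed polymer", 180), ("lithium-ion, low end", 100), ("lithium-ion, high end", 120)]:
    print(f"{label}: {pack_mass_kg(60, density):.0f} kg for a 60 kWh pack")
# claimed polymer: about 333 kg; lithium-ion: roughly 500-600 kg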

Dr Thomas Miller, an expert on supercapacitors at University College London, who was not involved in the work, said the technology would have to scale up to compete. “If a significant leap has been made in energy density, it would be an important achievement,” he said.

“One major consideration that is yet to be proven is the scalability, cost and sustainability of the new technology.”

Highgate said he was confident that prototype production of his supercaps could be under way within two years, initially for specialist use, such as by the military.

Europe Takes First Steps in Electrifying World’s Shipping Fleets

Container ships, tankers, freighters, and cruise liners are a significant source of CO2 emissions and other pollutants. Led by Norway, Europe is beginning to electrify its coastal vessels – but the task of greening the high seas fleet is far more daunting.

By Paul Hockenos • February 22, 2018

Since early 2015, a mid-sized car ferry, the MS Ampere, has been traversing the Sognefjord in western Norway from early morning to evening, seven days a week — without a whiff of smokestack exhaust or a decibel of engine roar. The 260-foot Ampere, which carries 120 cars and 360 passengers, is one of the world’s first modern, electric-powered commercial ships, with battery and motor technology almost identical to today’s plug-in electric cars, only on a much larger scale.

Norway’s long and jagged Atlantic coastline — with thousands of islands and deep inland fjords — made the Norwegians a seafaring people long ago, and even today ferry travel is the fastest way to reach many destinations. Given this geography and the country’s abundant hydroelectric resources, it’s hardly surprising that the Norwegians have plunged ahead in the development of electric shipping, beginning with light, short-range ferries.

Currently, Norway has just two fully operational electric-powered ferries. But another 10 will be christened this year, 60 by 2021, and by 2023 the country’s entire ferry fleet will either be all-electric or, for the longer routes, equipped with hybrid technology, experts say. Moreover, Norway’s top cruise ship operator will soon launch two expedition cruise liners with hybrid propulsion that are designed to sail the Arctic. Several Norwegian companies have teamed up to construct a coastal, all-electric container ship that could eliminate 40,000 diesel truck trips annually. Eidesvik Offshore, a firm supplying offshore oil rigs, has converted a supply vessel to operate on batteries, diesel, and liquefied natural gas.

Norway is already a global leader in the adoption of electric vehicles, spurred in large measure by the hydropower that provides 98 percent of the country’s electricity. So moving into the forefront of tackling a major global environmental challenge — decarbonizing the world’s shipping fleet — was a natural step for the country. Other nations — including Finland, the Netherlands, China, Denmark, and Sweden — also are beginning to launch electric ships. Last year, China, for example, commissioned a 230-foot all-electric cargo ship, one that, ironically, transports coal along the Pearl River.

But if the electrification of the world’s automobile and truck fleet represents a daunting challenge, then converting the global shipping fleet from heavily polluting fuel oil and diesel to renewable sources of energy is no less complex. It’s one thing to use electricity and lithium ion batteries to power a car ferry across a Norwegian fjord, with charging stations at both ends of the run. It’s quite another to power the more than 50,000 tankers, freighters, and cargo carriers in the world’s merchant fleets across oceans. International shipping now accounts for about 3 percent of global carbon dioxide emissions, and this could shoot up dramatically to 17 percent by 2050 if other sectors decarbonize while shipping emissions climb higher, as they have unremittingly in recent years. The booming cruise ship industry has become a significant problem recently, emitting large quantities of carbon dioxide and sulfur dioxide, among other pollutants, according to the German environmental group, NABU.

Converting the world’s shipping industry to run on renewable energy remains a longer-term goal that experts say will require development of more sophisticated battery technology and a new regulatory framework. “A lot of energy is needed to propel ships,” argues Olaf Merk of the International Transport Forum, a think tank for transport policy that is part of the Organization for Economic Cooperation and Development. “Electric ships are becoming attractive options for ships sailing short distances. But longer distances would require huge battery packs. This wouldn’t be attractive at the moment because of its high costs.”

“Countries with huge fleets are obstructing changes that would drive forward the electrification of marine transport,” says one expert.

Yet many analysts say that even though the technology to power large, ocean-going vessels on electricity is not yet ripe, the shipping industry’s conservative mindset is also a major impediment to the sector’s transformation. “The industry doesn’t really believe that a switch from bunker fuels is possible,” says Faig Abbasov, a shipping expert with Transport & Environment — a Brussels-based international environmental organization — referring to the fuel oils used to power ships. “And it’s countries with huge fleets that are obstructing changes that would drive forward the electrification of marine transport.”

Abbasov says the sector would change much more quickly if ship fuels, which currently go untaxed, were taxed, and if electricity for powering ships, which currently is taxed across Europe, were not. “This means that ship owners sticking with the dirtiest fuels are given a free ride,” he says.

Despite these challenges, Norway is steadily making progress toward converting its shipping fleet to run on renewable energy. “It’s really impressive — the transformation of shipping is beginning right now, it’s happening very fast, and not just in Norway,” says Borghild Tønnessen-Krokan, director of Forum for Development and Environment, an independent Norwegian NGO that has for years pushed for low-carbon transportation. “Shipping is part of a bigger green revolution in transportation in Norway,” she added, noting that more than half of all Norwegian cars sold last year were hybrid or electric.

The flurry of activity in electric shipping may begin to address the glaring omission of shipping in the Paris Climate Accord, which did not cover maritime transport. Shipping industry lobbyists and nations such as China and Brazil aggressively fought the inclusion of ship emissions in the accord, claiming that such a truly international sector couldn’t be held responsible for emissions in the same way that countries are. The EU and the International Maritime Organization (IMO) have set up monitoring criteria and energy efficiency standards that will become more stringent over time, but the IMO, at the behest of the industry and high-profile shipping countries, has resisted meaningful and binding emissions reduction goals for shipping companies. The EU has pushed back, threatening that it will include the sector’s CO2 pollution in its emissions-trading scheme if the IMO fails to take significant action.

An all-electric, driverless canal barge by the Dutch company Port-Liner is expected to start navigating between Amsterdam, Antwerp, and Rotterdam in the summer of 2018.

Fighting climate change, however, was only one motive that inspired the niche shipyard Fjellstrand; Corvus Energy, a Canadian energy storage firm; and Norled AS, a ferry operator, to join forces with international heavyweight Siemens AS, Europe’s largest industrial manufacturer, to get Norway’s novel electric ferry pilot project off the ground.

The historic Fjellstrand shipyard, nestled on the shores of the Hardangerfjord in southern Norway, had been toying with the idea of battery-powered ferries for years. Fjellstrand surmised that since the country’s electricity supply is almost all renewably generated, thanks to its abundant rivers and mountain lakes, the economics of electric propulsion could eventually undercut the large fuel costs that sea travel entails.

European engineers had tinkered with electric ships for over a century, but after a heyday in the early 1900s, electric motors lost out to the internal combustion engine in the 1920s. Norwegian submarines have long relied on hybrid diesel-electric propulsion, and Fjellstrand commissioned a series of viability studies for ferries.

But a decade ago, the technology, particularly the batteries, was simply too primitive and costly. “Our first studies found that we needed 450 tons of batteries to make it work,” explains Edmund Tolo of Fjellstrand. “In terms of size, ferries are of an entirely different dimension than cars.” It wasn’t until the company switched from designs using lead-acid batteries to more sophisticated lithium-ion batteries, and won a tender from Norway’s transportation authority in 2010, that the electric ferry project began to take off.

A new electric ferry reduced CO2 emissions by 95 percent and operating costs by 80 percent.

The biggest hurdle was no longer battery size. Storage technology had improved dramatically, and a medium-sized ferry’s engine room, which is large enough to accommodate roughly 12 tons of engine, can hold a lot of batteries. But the issue was how, in terms of time and magnitude, to deliver the massive charge — about one hundred times that required for a plug-in automobile — needed by even a moderately sized vessel to cross the Sognefjord, a distance of 3.5 miles.

Fjellstrand’s solution was to have battery packages and heavy-duty charging stations on both shores, at the port towns of Oppedal and Lavik, as well as in the MS Ampere itself. The shore-based batteries would be charged mid-journey, enabling the power to be transferred to the boat’s batteries while it docks, and in just 10 minutes. Many other Norwegian ports already have shore power stations, an accessory instrumental for its offshore oil industry.
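A rough sense of the power levels involved, using only the "one hundred times" comparison and the 10-minute dock time from the text, plus one labeled assumption (a typical 7 kW home car charger, which is not from the article):

# Back-of-envelope charging figures for the ferry crossing described above.
CAR_CHARGER_KW = 7.0                      # assumed typical home charge point (not from the article)
ferry_charge_kw = 100 * CAR_CHARGER_KW    # "about one hundred times" a plug-in car
dock_minutes = 10                         # charging window while the ferry is docked
energy_per_stop_kwh = ferry_charge_kw * dock_minutes / 60
print(f"~{ferry_charge_kw:.0f} kW charging power, ~{energy_per_stop_kwh:.0f} kWh delivered per stop")
# The shore-side battery packs buffer this peak so the local grid never sees the full load.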

“The other big issue was safety,” says Tolo. The thermal reaction that occurs in lithium-ion batteries generates intense heat that can lead to explosions and fires. The Ampere’s team had to design a unique cooling system for the twelve-ton packs of lithium batteries.

Industry studies have underscored the Ampere’s benefits and its technology’s maturity, showing that electric propulsion reduced CO2 emissions by 95 percent and operating costs by 80 percent.

“The technology is there, but to make it happen there have to be sticks as well as carrots,” says one expert.

“The energy costs are lower than diesel,” says Jan Kjetil Paulsen, a shipping expert with Bellona, an international environmental NGO based in Oslo, Norway. “Maintenance costs are less, too, as the electric motor is less complex than the diesel engine. And an electric motor lasts three times longer than a typical internal combustion system.”

Elsewhere in Europe, the electrification of maritime travel is gradually beginning to take off. Late last year, Finland launched its first electric car ferry, and dozens of hybrid ferries and electric-powered ferries are scheduled to go into service in the coming years. Finland’s cutting-edge maritime research vessel, the Aranda, has switched to hybrid propulsion. The ship, which belongs to the Finnish Environment Institute (SYKE), has benefited in more ways than one from adding an electric power system. The Aranda is equipped with a front-end ice-cutter that slashes through Arctic ice fields en route to monitoring stations and other winter research locations. Since the electric motor has higher rotational force than the diesel motor, it is significantly more effective at powering the cutter to break up thick ice cover, says Jukka Pajala, a senior adviser at SYKE. Moreover, the electric motor doesn’t expel pollutants that exacerbate the ice melt caused by global warming. And the electric motor is virtually silent, a critical advantage for researching marine life.

Denmark and Sweden are cooperating on two large eight-ton, electric passenger ferries that will travel the seven miles between Helsingborg, Sweden and Helsingør, Denmark. This summer, the Dutch company Port-Liner will unveil five all-electric, driverless, emissions-free barges, dubbed the “Tesla ships,” that will navigate the canals linking the ports of Amsterdam, Antwerp, and Rotterdam. The EU supported the 100 million-euro project with 8.5 million euros.

Despite these signs of progress, serious national and international regulations and incentives on converting the shipping industry to renewable sources of energy must be enacted, including a ban on heavy fuel oil in the Arctic to reduce the emissions of sooty, heat-absorbing pollution particles. “The technology is there,” says Tønnessen-Krokan of Norway’s Forum for Development and Environment. “Incentives have worked to make it happen, but there have to be sticks as well as carrots. Shipping should have been subject to emissions targets years ago.”

Paul Hockenos is a Berlin-based writer whose work has appeared in The Nation, Foreign Policy, New York Times, Chronicle of Higher Education, The Atlantic and elsewhere. He has authored several books on European affairs, most recently Berlin Calling: A Story of Anarchy, Music, the Wall and the Birth of the New Berlin. He was a fellow at the American Academy in Berlin.

Measuring the Risks of Tidal and Wave Power

Researchers are investigating the possibility of environmental damage before the industry kicks off.

Authored by Ramin Skibba

February 22nd, 2018

As the world seeks to cut its reliance on fossil fuels, scientists have been working to harness the forces of nature—from the sun and the wind to the waves and the tides—to produce reliable sources of renewable power. But just like the energy sources they seek to replace, such as carbon-spewing oil and coal, these new sources of green energy will inevitably cause some environmental damage.

Skeptics and scientists have raised a range of hypothetical ways in which wave and tidal power infrastructure could hurt animals. Maybe seals, seabirds, and fish will be sliced and diced by underwater turbines. Perhaps they will be disturbed by the sounds of underwater generators. Sharks and rays—sensitive to electromagnetic fields—might be thrown astray by subsea power cables. Seawater and sediment could be churned up, disrupting migrating and feeding animals. As with any new technology, it’s difficult to accurately gauge the actual threat posed by any of these imagined scenarios.

That’s why Andrea Copping, an offshore energy expert at the US Department of Energy-funded Pacific Northwest National Laboratory, set out to assess the risks posed by common forms of ocean energy infrastructure. She finds that tidal turbine blades present the most immediate danger to wildlife, but impacts would be rare, and in most cases non-lethal.

Through laboratory experiments and modeling scenarios, Copping and her team analyzed which of the proposed threats might pose a danger to ocean ecosystems. Crucially, they also sought to put these dangers in context, comparing them against the better-known dangers posed by offshore wind farms, oil platforms, and ships. The team presented their preliminary findings at a scientific conference in Portland, Oregon, earlier this month.

In one experiment, Copping and her team used blubber and skin from dead killer whales and harbor seals to simulate the effects of the animals being hit by turbine blades. They also modeled the odds that a harbor seal would interact with such devices in a way that it would get hurt.

Ultimately, they found that the threat of most planned wave and tidal energy setups is quite low—especially when compared to other coastal energy projects, such as offshore oil drilling.

An animal hit by turbine blades might be bruised, Copping says, but the damage is unlikely to be fatal. The wave energy industry has not settled on a favored design, but most prototypes involve fewer moving parts than tidal turbines. All marine energy systems have power cables that could potentially affect sharks’ and rays’ sense of electromagnetic perception, Copping says, but only in the cables’ immediate vicinity.

The findings echo previous research, says Beth Scott, a zoologist at the University of Aberdeen in Scotland. Her colleagues have used dead seals to test the damage turbines can cause; except near the blades' tips, the blades usually move too slowly to be lethal to marine mammals, though injuring even a single endangered animal, such as a killer whale, would be a major problem.

People tracking tagged seals near an experimental wave energy project off Scotland have not reported any injured or killed animals, Scott adds.

But there is still the potential for chronic and cumulative damage caused by noisy machinery and churning water, says Sarah Henkel, an ecologist at Oregon State University. Slightly but repeatedly modifying the flow of water could disrupt the migration and foraging of some fish species, she says, while the circulation of sediment could affect seafloor animals.

Ocean power infrastructure will also require construction and maintenance, and will likely drive increased boat traffic, Henkel adds, which could disrupt marine wildlife more than the devices themselves.

But Copping notes that wave and tidal energy systems work best in high energy seas, with water that is rough and hard to work in. “No one spends longer at sea with vessels in these harsh parts of the ocean than can be helped,” she says.

The industry has been moving slowly, making it difficult to predict how problematic such long-term issues might be. Henkel says she and other researchers are monitoring ongoing trials, and watching for any potential environmental effects.

In any case, Copping thinks wave and tidal energy have an important role to play. “I think we need to take a varied energy portfolio approach, and I think marine energy has a real place in that,” she says.

“We’re trying to learn more, and we’re trying to convince regulators and stakeholders that it’s probably not the risk they imagine.”

NASA Study Brings Antarctic Ice Loss Into Sharper Focus

by Pat Brennan

Pasadena CA (JPL) Feb 21, 2018

The flow of Antarctic ice, derived from feature tracking of Landsat imagery.

A NASA study based on an innovative technique for crunching torrents of satellite data provides the clearest picture yet of changes in Antarctic ice flow into the ocean. The findings confirm accelerating ice losses from the West Antarctic Ice Sheet and reveal surprisingly steady rates of flow from its much larger neighbor to the east.

The computer-vision technique crunched data from hundreds of thousands of NASA-U.S. Geological Survey Landsat satellite images to produce a high-precision picture of changes in ice-sheet motion.

The new work provides a baseline for future measurement of Antarctic ice changes and can be used to validate numerical ice sheet models that are necessary to make projections of sea level. It also opens the door to faster processing of massive amounts of data.

"We're entering a new age," said the study's lead author, cryospheric researcher Alex Gardner of NASA's Jet Propulsion Laboratory in Pasadena, California.

"When I began working on this project three years ago, there was a single map of ice sheet flow that was made using data collected over 10 years, and it was revolutionary when it was published back in 2011.

"Now we can map ice flow over nearly the entire continent, every year. With these new data, we can begin to unravel the mechanisms by which the ice flow is speeding up or slowing down in response to changing environmental conditions."

The innovative approach by Gardner and his international team of scientists largely confirms earlier findings, though with a few unexpected twists.

Among the most significant: a previously unmeasured acceleration of glacier flow into Antarctica's Getz Ice Shelf, on the southwestern part of the continent - likely a result of ice-shelf thinning.

Speeding up in the west, steady flow in the east

The research, published in the journal "The Cryosphere," also identified the fastest speed-up of Antarctic glaciers during the seven-year study period. The glaciers feeding Marguerite Bay, on the western Antarctic Peninsula, increased their rate of flow by 1,300 to 2,600 feet (400 to 800 meters) per year, probably in response to ocean warming.

Perhaps the research team's biggest discovery, however, was the steady flow of the East Antarctic Ice Sheet. During the study period, from 2008 to 2015, the sheet had essentially no change in its rate of ice discharge - ice flow into the ocean. While previous research inferred a high level of stability for the ice sheet based on measurements of volume and gravitational change, the lack of any significant change in ice discharge had never been measured directly.

The study also confirmed that the flow of West Antarctica's Thwaites and Pine Island glaciers into the ocean continues to accelerate, though the rate of acceleration is slowing.

In all, the study found an overall ice discharge for the Antarctic continent of 1,929 gigatons per year in 2015, with an uncertainty of plus or minus 40 gigatons. That represents an increase of 36 gigatons per year, plus or minus 15, since 2008. A gigaton is one billion tons.

The study found that ice flow from West Antarctica - the Amundsen Sea sector, the Getz Ice Shelf and Marguerite Bay on the western Antarctic Peninsula - accounted for 89 percent of the increase.
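
As a rough way to put those numbers in context, the short Python sketch below recomputes the relative increase and the West Antarctic share directly from the figures quoted above. The roughly 360-gigatons-of-ice-per-millimetre sea-level conversion is a standard rule of thumb, not a result from this study, and extra discharge is not the same thing as net mass loss.

# Back-of-the-envelope check using only the figures quoted above.
discharge_2015 = 1929        # gigatons per year, plus or minus 40
increase_since_2008 = 36     # gigatons per year, plus or minus 15
west_share = 0.89            # fraction of the increase from West Antarctica

print(f"relative increase since 2008: {increase_since_2008 / (discharge_2015 - increase_since_2008):.1%}")
print(f"West Antarctic share of the increase: {west_share * increase_since_2008:.0f} Gt/yr")
# ~360 Gt of ice corresponds to roughly 1 mm of global sea level (rule of thumb).
print(f"sea-level equivalent of the extra discharge: {increase_since_2008 / 360:.2f} mm/yr")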

The science team developed software that processed hundreds of thousands of pairs of images of Antarctic glacier movement from Landsats 7 and 8, captured from 2013 to 2015.

These were compared to earlier radar satellite measurements of ice flow to reveal changes since 2008.

"We're applying computer vision techniques that allow us to rapidly search for matching features between two images, revealing complex patterns of surface motion," Gardner said.

Whereas researchers previously compared small sets of very high-quality images from a limited region to look for subtle changes, the novelty of the new software is that it can track features across hundreds of thousands of images per year - even those of varying quality or obscured by clouds - over an entire continent.
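
The team's actual processing chain is not described here, but the core idea of feature tracking can be illustrated with a minimal Python sketch: a small patch from the first image is searched for in the second, and the offset that maximizes normalized cross-correlation gives the surface displacement, which divided by the time between acquisitions yields a velocity. The patch size, search radius, 15-meter pixel size and synthetic image pair below are illustrative assumptions, not values or code from the study.

import numpy as np

def track_patch(img1, img2, row, col, patch=16, search=8):
    """Offset (drow, dcol), in pixels, of the patch centered at (row, col) between images."""
    tpl = img1[row - patch:row + patch, col - patch:col + patch].astype(float)
    tpl = (tpl - tpl.mean()) / (tpl.std() + 1e-9)
    best_score, best_offset = -np.inf, (0, 0)
    for dr in range(-search, search + 1):
        for dc in range(-search, search + 1):
            win = img2[row + dr - patch:row + dr + patch,
                       col + dc - patch:col + dc + patch].astype(float)
            win = (win - win.mean()) / (win.std() + 1e-9)
            score = float(np.mean(tpl * win))      # normalized cross-correlation
            if score > best_score:
                best_score, best_offset = score, (dr, dc)
    return best_offset

# Synthetic demonstration: shift an image by a known amount and recover the motion.
rng = np.random.default_rng(0)
img_a = rng.random((128, 128))
img_b = np.roll(img_a, shift=(3, 1), axis=(0, 1))      # pretend 3-pixel / 1-pixel motion
drow, dcol = track_patch(img_a, img_b, 64, 64)
pixel_size_m, dt_years = 15.0, 1.0                      # assumed Landsat-like pixel, 1-year pair
speed_m_per_yr = (drow ** 2 + dcol ** 2) ** 0.5 * pixel_size_m / dt_years
print(f"recovered offset: {(drow, dcol)} pixels, speed about {speed_m_per_yr:.0f} m/yr")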

"We can now automatically generate maps of ice flow annually - a whole year - to see what the whole continent is doing," Gardner said.

The new Antarctic baseline should help ice sheet modelers better estimate the continent's contribution to future sea level rise.

"We'll be able to use this information to target field campaigns, and understand the processes causing these changes," Gardner said.

"Over the next decade, all this is going to lead to rapid improvement in our knowledge of how ice sheets respond to changes in ocean and atmospheric conditions, knowledge that will ultimately help to inform projections of sea level change."

Expect seas to rise for the next 300 years, new climate models warn

by Brooks Hays

Washington (UPI) Feb 22, 2018

Even if carbon emissions are curbed and rising temperatures are constrained, many scientists expect sea level rise to continue for some time. New research suggests sea level rise could last 300 years.

For some time, climate scientists have argued that some of global warming's impacts have already been baked into the planet's systems. Even if global temperatures stopped rising tomorrow, researchers contend, Earth's ice sheets have already been destabilized, leading to continued melting and sea level rise.

But while the latest efforts of scientists at the Potsdam Institute for Climate Impact Research lend credence to this contention, their findings point to the need for stronger emissions controls.

"Man-made climate change has already pre-programmed a certain amount of sea-level rise for the coming centuries, so for some it might seem that our present actions might not make such a big difference -- but our study illustrates how wrong this perception is," Potsdam researcher Matthias Mengel said in a news release.

The goal of the international Paris agreement is to reach the peak in global emissions as soon as possible -- a peak followed by an aggressive decline. The latest research, published this week in the journal Nature Communications, suggests the sooner the peak, the better.

"Every delay in peaking emissions by five years between 2020 and 2035 could mean additional 20 cm of sea-level rise in the end -- which is the same amount the world's coasts have experienced since the beginning of the pre-industrial era," Mengel said.

Mengel and his colleagues used several of the most advanced climate and sea level rise models to simulate a range of scenarios. All of the simulations operated under the assumption that global emissions restraints will limit warming to 2 degrees Celsius -- the aim of the Paris agreement. Each scenario differed, however, in the speed at which the goal is achieved.

The models suggest the planet will experience between 2.3 and 4 feet of sea level rise by the year 2300, but that achieving an emissions apex sooner rather than later will keep sea level rise at the lower end of the spectrum.
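
To make the quoted relationship concrete, here is a minimal sketch that scales sea-level rise linearly with the delay in the emissions peak, using Mengel's figure of roughly 20 cm per five-year delay and taking the article's 2.3-foot (about 70 cm) lower bound as the value for a 2020 peak. Both the starting point and the strictly linear scaling are simplifying assumptions for illustration, not output from the study's models.

# Illustrative only: linear scaling of sea-level rise by 2300 with the year emissions peak,
# per the ~20 cm per five-year delay quoted above. The 70 cm baseline for a 2020 peak is an
# assumption taken from the article's lower bound, not a model result.
def sea_level_rise_2300_cm(peak_year, base_cm=70.0, cm_per_five_year_delay=20.0):
    return base_cm + cm_per_five_year_delay * (peak_year - 2020) / 5.0

for year in (2020, 2025, 2030, 2035):
    print(f"emissions peak in {year}: roughly {sea_level_rise_2300_cm(year):.0f} cm by 2300")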

As usual, uncertainty remains. Scientists have struggled to improve the precision of their sea level rise predictions as a result of the complexities of the Antarctic ice sheet.

"The uncertainty of future sea-level rise is at present dominated by the response of Antarctica. With present knowledge on ice sheet instability, large ice loss from Antarctica seems possible even under modest warming in line with the Paris agreement," said Mengel.

And while Mengel's simulations assume the goals of the Paris agreement are met, many climate scientists aren't so confident. Critics of the agreement contend a stronger global commitment to rein in emissions is necessary to meet the warming target of 2 degrees Celsius.

Taiwan to ban disposable plastic items by 2030

by Staff Writers

Taipei (AFP) Feb 22, 2018

Taiwan is planning a blanket ban on single-use plastic items including straws, cups and shopping bags by 2030, officials said Thursday, with restaurants facing new restrictions from next year.

It is the latest push by Taiwan to cut waste and pollution after introducing a recycling programme and charges for plastic bags.

The island's eco-drive has also extended to limiting the use of incense at temples and festivals to protect public health.

Its new plan will force major chain restaurants to stop providing plastic straws for in-store use from 2019, a requirement that will expand to all dining outlets in 2020.

Consumers will have to pay extra for all straws, plastic shopping bags, disposable utensils and beverage cups from 2025, ahead of a full ban on the single-use items five years later, according to the road map from the government's Environmental Protection Administration (EPA).

"We aim to implement a blanket ban by 2030 to significantly reduce plastic waste that pollutes the ocean and also gets into the food chain to affect human health," said Lai Ying-ying, an EPA official supervising the new programme.

According to Lai, a Taiwanese person on average uses 700 plastic bags annually. The EPA aims to reduce the number to 100 by 2025 and to zero by 2030.

The government has already banned free plastic shopping bags in major retail outlets including supermarkets and convenience stores, expanding the move to smaller businesses including bakeries and drinks kiosks from this year.

The island started recycling plastic and pushing to reduce single-use plastic items more than a decade ago.

Last year, nearly 200,000 tonnes of plastic containers were recycled, the EPA said.

Environmentalists lost big on LNG exports. Now what?

Ellen M. Gilmer, E&E News reporter

Energywire: Wednesday, February 21, 2018

A federal court has upheld approvals of a series of liquefied natural gas export projects, including the Freeport LNG terminal in Texas. Photo: Freeport LNG Development LP

After years of fever-pitched opposition to liquefied natural gas exports, the Sierra Club quietly pulled its last LNG lawsuit from federal court last month.

The withdrawal followed a series of losses in similar cases, a subdued end to environmentalists' chief legal campaign against shipping U.S. natural gas around the world. The Sierra Club considers LNG exports too risky for the climate, but federal judges repeatedly found that the government had sufficiently considered impacts.

Many in the natural gas industry are now breathing a sigh of relief, satisfied that the U.S. Court of Appeals for the District of Columbia Circuit spoke clearly on the climate issue and glad that the Sierra Club got the message.

But while environmentalists may have abandoned the specific approach used in the failed LNG challenges, they're positioning themselves for a second wave of combat that could prove every bit as bothersome to the industry and the Trump administration.

Put simply: "Sierra Club has not ceded the climate fight on LNG exports," said attorney Nathan Matthews, who led the group's LNG cases.

The next round of challenges to exports is bubbling up slowly — in comments on rulemaking dockets, grass-roots organizing in Texas and administrative protests in Oregon.

In each case, the Sierra Club or other groups could wind up back in the courtroom, opening a new chapter in the long legal debate and sparking fresh uncertainty for export projects.

Environmentalists' string of legal defeats on LNG exports began in mid-2016 and piled up quickly from there.

The D.C. Circuit that summer rejected a trio of challenges to the Federal Energy Regulatory Commission's approval of various gas facilities, finding that FERC had no obligation to consider broad climate impacts of exports when the Department of Energy was the agency with control over whether to actually greenlight those shipments.

The Sierra Club duly shifted its focus to DOE, lodging five challenges that asked the court to force the agency to take a closer look at indirect effects of exports, including increased production and use of natural gas.

Last August, the court gave a firm answer: no.

In a case focused on an export application for the Freeport LNG facility in Texas, the D.C. Circuit ruled that DOE had adequately considered climate concerns in various review documents, including a broad life-cycle analysis of LNG's greenhouse gas emissions.

The court said the agency was not required to tailor the life-cycle review — which DOE completed in 2014 and relied upon to approve several other projects — to any particular volume of exports.

A few months later, three other cases fell like dominoes. The D.C. Circuit didn't even bother hearing oral argument on the challenges to exports from Corpus Christi, Texas; Sabine Pass, La.; or Cove Point, Md. It rejected all three in an unpublished four-page judgment that said the Freeport decision had already answered the primary legal questions.

The court had no appetite for environmentalists' continued argument that DOE's analysis of potential climate impacts fell short of its obligations under the National Environmental Policy Act and the Natural Gas Act, which requires the agency to make a public interest determination for exports to countries lacking a free-trade agreement with the United States.

Jessica Wentz, an attorney for Columbia Law School's Sabin Center for Climate Change Law, said the decisions give DOE broad legal cover for future export approvals.

"We have the court basically saying DOE's analysis was sufficient because DOE had a report that it appended to the analysis saying, 'Here's the life-cycle greenhouse gas emissions for a particular unit of natural gas as compared to coal and oil,' and that was sufficient for the review," she said. "That same report, it's relatively easy for DOE to just append that to any review it does for an LNG export terminal, at which point that satisfies the D.C. Circuit standard."

Indeed, DOE has relied on the 2014 life-cycle study for all of its export approvals since then and has not signaled any plans to change its practice.

Some LNG boosters say that means it's game over for the Sierra Club's climate challenges.

"I am hard-pressed from where I sit and my vantage point to see how there might be additional litigation from an environmental review standpoint on these facilities," said Charlie Riedl, executive director of the Center for Liquefied Natural Gas. "I think the precedent is there and set, and seeing the most recent withdrawal from Sierra Club I think is a pretty good indicator that they probably see it the same way."

Hunton & Williams LLP attorney Eric Hutchins agreed, saying the environmental group's January decision to pull the remaining lawsuit against DOE signaled that subsequent export approvals will likely be in the clear, at least for now.

"Sierra Club wouldn't withdraw litigation if it had options left to challenge DOE's export authorization analysis," he said. "A significant change in the export landscape could require DOE to revisit that analysis, but in the near term, we can expect more export projects to move forward."

So was the Sierra Club's strategy a miscalculation? That's a tough question for Matthews, who spent years working on the litigation.

"Obviously we remain convinced that we were right in those lawsuits, but maybe we would have had additional success had we done a different approach," he said.

Then again, he added, the group was and is certain that DOE's approach to climate analysis represented a systemic problem, so it's hard to imagine choosing any other plan of attack at the outset.

For example, the Sierra Club could have focused its briefs on site-specific impacts of LNG terminals and exports — issues it outlined in earlier legal documents — but, Matthews said, not "at the expense of our arguments about the systemic challenge."

"You only get 13,000 words in a brief, and that may sound like a lot ... but it is not enough to cover everything that we think is wrong with one of these proposals," he said. "So it's always a pick-your-battles, even though there's a much bigger picture in the background of every one of these fights."

Plus, he said, even without a court victory, the Sierra Club's advocacy at the FERC stage may have nudged DOE toward performing its life-cycle LNG study and additional environmental analysis in the first place.

"And although we remain convinced that those DOE materials fall far short of what NEPA requires when it comes to reviewing an individual proposal, it's still a lot better than what we were getting before," Matthews said.

Former Sierra Club climate attorney David Bookbinder also defended the group's tactics.

"They didn't put their eggs in one basket; they didn't even put their eggs in one legal basket," said Bookbinder, now chief counsel for the libertarian Niskanen Center. "That particular basket just didn't work out. I don't see any grounds to say, 'You shouldn't have done those cases.'

"They ran into some unfortunate decisions," he added, "and they're saying, 'OK, that's a dry hole; we're going to go drill elsewhere,' so to speak — to borrow a metaphor from the other side."

It's not yet clear what other legal strategies might be most fruitful, but environmental lawyers are busy trying them out for the next wave of proposals.

In Oregon, for example, the Sierra Club has teamed up with regional environmental groups fighting the proposed Jordan Cove LNG terminal and a related pipeline.

LNG lawsuits at the D.C. Circuit

JUNE 2016

D.C. Circuit rejects two environmental challenges to FERC authorization of LNG terminals at Sabine Pass, La., and Freeport, Texas.

JULY 2016

D.C. Circuit rejects environmental challenge to FERC authorization of LNG terminal at Cove Point, Md.

AUGUST 2017

D.C. Circuit rejects environmental challenge to DOE authorization of LNG exports from Freeport.

NOVEMBER 2017

D.C. Circuit rejects three environmental challenges to DOE authorizations of LNG exports from Cove Point, Sabine Pass and Corpus Christi, Texas.

JANUARY 2018

Sierra Club withdraws remaining challenge to separate DOE authorization of LNG exports from Sabine Pass.

The battle is still at the FERC stage: The commission in 2016 rejected developers' initial proposal, citing a lack of demonstrated market demand for the pipeline that would feed Jordan Cove, and Canadian backer Veresen Inc. reapplied last year.

In a formal protest filed in October, the Sierra Club and other groups focused on project needs, effects on landowners and impacts to jobs in the timber, fishing and tourism industries. Separate docket filings from the groups lay out the environmental case for FERC to look closely at site-specific impacts and regional effects of climate change.

If FERC approves the terminal, the groups will likely head to court. And if DOE then approves the actual exports, they'll oppose that, too.

Wentz of Columbia noted that many environmental groups are also zooming out from LNG exports and looking at the broader natural gas infrastructure network.

Indeed, they've gotten some more traction in their opposition to gas pipelines over the past year — most notably with a D.C. Circuit ruling that FERC failed to adequately consider indirect climate impacts of a pipeline project in the Southeast, which could be shut down as a result.

Activists' idea is to starve the system. The more expansive the infrastructure is, the easier it is to build LNG terminals; stopping the spread of pipelines dries up fuel sources for exports.

The Chesapeake Climate Action Network, one of the fiercest opponents of Dominion Energy Inc.'s Cove Point LNG export facility in Maryland, is at the forefront of that battle.

"Our goal all along was to prevent or keep at a low, low minimum the amount of gas that Cove Point ever exports," CCAN Director Mike Tidwell said. "The fight continues to adhere to that original goal, and that is to make sure that the export facility at Cove Point is a failed enterprise. We tried to trigger the failure through the regulatory process to keep it from being built and the legal process, and now we're on to the longer-term, multipronged movement."

Tidwell noted that the group is actively opposing multiple gas pipelines in the Mid-Atlantic region and successfully pushed for a ban on hydraulic fracturing in Maryland. CCAN is also supporting state-level legislation that would tackle methane leakage from existing pipelines.

It's a roundabout way of fighting LNG exports, but the approach more directly addresses environmentalists' complaints about the upstream climate impacts. Critics concerned about indirect climate effects of exports generally focus on two areas: emissions of methane from increased gas production to feed facilities, and emissions of carbon dioxide and other greenhouse gases from the ultimate combustion of natural gas for energy.

And the pipeline fight isn't just about the climate. Instead, groups across the country are layering on concerns about localized impacts, environmental justice and property rights.

"Whereas for a few years, our LNG work was really picking climate as the field to fight on, we're now having a much broader approach in challenging gas infrastructure," Matthews said.

LNG exports may land squarely before the D.C. Circuit again, too.

While some in the industry have argued that the door is closed for environmental groups to challenge DOE's climate analysis, the Sierra Club still sees windows for legal action.

Matthews noted that any challenge to LNG export approvals would be based on the specific record DOE had before it at the time of a decision, meaning that improved attribution tools or evolving research modeling climate impacts and gas markets could expand the agency's analysis obligations.

"So clearly it's possible that we could go back to the Department of Energy and say, 'The D.C. Circuit upheld what you did before, but now you can do better and there's no excuse not to,'" Matthews said.

"That's one truism of how agency litigation goes," he said. "It's always based on, 'Did the agency do an acceptable job based on what it had before it?' And science and other tools are always advancing."

That makes DOE's LNG life-cycle analysis a clear target, said Wentz. Export opponents could, for instance, try to poke holes in the study's conclusions or assumptions by gathering persuasive research that U.S. LNG actually displaces renewable energy elsewhere, undermining climate benefits from the cleaner-burning nature of gas over coal.

New, higher estimates of the level of greenhouse gas emissions associated with the combustion of fuels would also be useful for environmentalists challenging future projects, she said.

"If you have new research that clearly contradicts the conclusions that were in the DOE LNG export study that it included in its NEPA reviews for these export terminals, then you certainly could have a challenge for a subsequent review where DOE includes that same report," she said.

Further, Matthews said, the court's 2017 Freeport opinion — the basis for all of its recent LNG decisions — can be read narrowly. The court rejected the environmental group's calls for enhanced agency analysis of particular indirect impacts of greenhouse gas emissions, but it didn't weigh in on a couple of other Sierra Club concerns.

First, the D.C. Circuit did not address the propriety of DOE using non-NEPA documents like the life-cycle study and an environmental addendum to support its export approval. Further, the court did not decide whether the government unlawfully segmented its review, failing to weigh the true scope of impacts from the collection of export applications. The court wrote that those particular issues were not squarely raised by the petitioners.

"So there's no precedent in the D.C. Circuit or anywhere else holding that that's necessarily lawful," Matthews said. "So while we remain disappointed by the D.C. Circuit's opinions, I think it's important to remember that those decisions were fairly caveated and narrow."

Indeed, the Sierra Club and its local partners are already pushing those issues in opposition to LNG proposals in Brownsville, Texas, and the Jordan Cove project in Oregon.

Bookbinder, the Niskanen attorney, cautioned that the environmental community's safest bet for increased climate analysis on exports may simply be to wait for a more favorable White House.

"For the next three years, I think we're in a holding pattern," he said. "They're not going to get much good out of FERC or DOE. I think they're going to have to now wait until there's a better administration that takes the emissions question a lot more seriously and one that takes the life-cycle emissions of LNG more seriously."

The industry, conversely, is looking to capitalize on the Trump administration's "energy dominance" agenda by advocating for a more streamlined DOE approval process.

"There's an argument to be made based on what we know about supply that you could essentially treat all exports as 'in the good of the public interest,'" said Riedl, the Center for LNG director.

In other words, the center thinks exports to countries without free-trade agreements with the United States — "non-FTA countries" — could skip the in-depth review process and instead receive the routine approval granted for shipments to free-trade partners.

Though it's not clear the administration is actually considering streamlining for the types of large-volume export proposals at issue in the recent D.C. Circuit cases, DOE is working on a rule to expedite permitting for small-scale exports to non-FTA countries.

Riedl says he's confident both small- and large-scale reforms would withstand legal scrutiny.

As for the Sierra Club? It sees another potential battleground.

Viruses are falling from the sky

by Staff Writers

Vancouver, Canada (SPX) Feb 09, 2018

Viruses and bacteria fall back to Earth via dust storms and precipitation, including Saharan dust intrusions from North Africa and rains from the Atlantic.

An astonishing number of viruses are circulating around the Earth's atmosphere - and falling from it - according to new research from scientists in Canada, Spain and the U.S.

The study marks the first time scientists have quantified the viruses being swept up from the Earth's surface into the free troposphere, that layer of atmosphere beyond Earth's weather systems but below the stratosphere where jet airplanes fly. The viruses can be carried thousands of kilometres there before being deposited back onto the Earth's surface.

"Every day, more than 800 million viruses are deposited per square metre above the planetary boundary layer--that's 25 viruses for each person in Canada," said University of British Columbia virologist Curtis Suttle, one of the senior authors of a paper in the International Society for Microbial Ecology Journal that outlines the findings.

"Roughly 20 years ago we began finding genetically similar viruses occurring in very different environments around the globe," says Suttle.

"This preponderance of long-residence viruses travelling the atmosphere likely explains why--it's quite conceivable to have a virus swept up into the atmosphere on one continent and deposited on another."

Bacteria and viruses are swept up into the atmosphere in small particles from soil dust and sea spray.

Suttle and colleagues at the University of Granada and San Diego State University wanted to know how much of that material is carried up above the atmospheric boundary layer, at altitudes of 2,500 to 3,000 metres. At that altitude, particles are subject to long-range transport, unlike particles lower in the atmosphere.

Using platform sites high in Spain's Sierra Nevada Mountains, the researchers found billions of viruses and tens of millions of bacteria are being deposited per square metre per day. The deposition rates for viruses were nine to 461 times greater than the rates for bacteria.

"Bacteria and viruses are typically deposited back to Earth via rain events and Saharan dust intrusions. However, the rain was less efficient removing viruses from the atmosphere," said author and microbial ecologist Isabel Reche from the University of Granada.

The researchers also found the majority of the viruses carried signatures indicating they had been swept up into the air from sea spray. The viruses tend to hitch rides on smaller, lighter, organic particles suspended in air and gas, meaning they can stay aloft in the atmosphere longer.

Saudi Arabia Makes Big Push to Boost Domestic Renewable Energy

February 7, 2018

The world’s largest exporter of oil, Saudi Arabia, is shifting away from fossil fuels toward renewable energy. The country’s government plans to invest up to $7 billion in seven new solar plants and a wind farm by the end of the year, with a goal to get 10 percent of its power from renewables by 2023, The New York Times reported.

This week, the government made a deal with ACWA Power, a Saudi energy company, to build a $300 million solar farm in Sakaka, in northern Saudi Arabia, that will power 40,000 homes. The project received bids as low as 2 to 3 cents per kilowatt-hour — lower than the cost of electricity generated from fossil fuels, according to Turki al-Shehri, head of the kingdom’s renewable energy program.

“All the big developers are watching Saudi,” Jenny Chase, an analyst at Bloomberg New Energy Finance, a market research firm, told The Times.

But while Saudi Arabia's renewable energy push will decrease the country's own carbon emissions, it won't impact global emissions since it will allow the government to export more of its oil to other nations. According to The Times, Saudi Arabian power plants burned an average of 680,000 barrels of oil a day last June, when scorching temperatures increased demand for air conditioning. Had this crude been sold abroad, it could have generated an extra $47 million a day for the government.
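
A quick arithmetic check, purely for illustration, shows the crude price implied by the figures in that paragraph.

# Implied crude price from the figures above: 680,000 barrels a day of domestically
# burned oil that could otherwise have earned about $47 million a day abroad.
barrels_per_day = 680_000
extra_revenue_per_day_usd = 47_000_000
print(f"implied price: about ${extra_revenue_per_day_usd / barrels_per_day:.0f} per barrel")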

Can Deepwater Aquaculture Avoid the Pitfalls of Coastal Fish Farms?

Near-shore fish farms have created a host of environmental problems. Now, U.S. aquaculture advocates – backed by mainstream conservation groups – are saying that locating well-run operations out in the ocean could produce sustainable food and protect wild stocks from overfishing.

By Marc Gunther • January 25, 2018

Donna Lanzetta has a big idea: She wants to grow striped bass on a deepwater fish farm, about eight miles off the coast of Southampton, Long Island, where she was born and raised.

A lawyer who knows real estate and politics, Lanzetta has garnered the support of local and state officials. Marine scientists and aquaculture experts advise her startup, which is called Manna Fish Farms. She has purchased an automated feed system that can be operated from shore, and plans to rely on hatchlings that are identical to wild striped bass, to ease concerns about escapes.

Now all she needs to do is raise a couple of million dollars, persuade a half-dozen or so federal agencies to grant her a permit, and, quite possibly, get an act of Congress to exempt her business from a law, aimed at protecting wild fish stocks, that makes it a crime to possess striped bass in federal waters.

“It’s not easy to be blazing the trail,” says Lanzetta.

Nothing is easy about developing aquaculture projects in U.S. federal waters, which cover the area between three and 200 miles offshore. U.S. fish farmers grow seafood in lakes, ponds, tanks, and coastal waters regulated by states, but except for a handful of shellfish farms, they don’t raise fish farther out in the oceans.

That’s unfortunate, some environmentalists say, because open-ocean aquaculture has decided advantages over fish farms in bays and estuaries, which have caused significant environmental problems as concentrations of fish waste and sea lice foul near-shore ecosystems. And today, offshore aquaculture supporters point out, data-rich geographic information systems enable companies and regulators to make smarter decisions about where to locate fish farms, so that stronger currents and deeper waters can dilute and wash away waste and pollution. Offshore projects are also less likely to provoke opposition from shoreline property owners. (Lanzetta’s farm, for example, will not be visible from the shore.)

Lanzetta is one of a very few would-be ocean-aquaculture pioneers in the United States. Don Kent, the president of the Hubbs-Sea World Research Institute in San Diego, has been trying for a decade to develop a fish farm that is now planned for a site four miles from the southern California coast. Marine biologist Neil Anthony Sims, the co-founder of an aquaculture startup called Kampachi Farms, plans to run a short-term pilot project, growing yellowtail 30 miles off the coast of southwest Florida.

They all face daunting obstacles. Regulatory uncertainty is perhaps the biggest: A half dozen or so federal agencies — including the National Oceanic and Atmospheric Administration (NOAA), the Environmental Protection Agency, the Army Corps of Engineers, the Navy, the Coast Guard, and the U.S. Fish and Wildlife Service — share permitting responsibilities. “A fearsome Gordian knot of overlapping jurisdictions and responsibilities” is how Sims describes it.

There are also logistical challenges. Open-ocean aquaculture pens must operate in rougher waters than their near-shore counterparts, increasing the risks of fish escapes — a concern among some biologists who fear that escaping Atlantic salmon, for example, might compete for food and spawning grounds with Pacific salmon. Locating a fish farm offshore is also more expensive, although ocean farms growing cobia and striped bass are currently operating successfully in Panama and Mexico.

While some critics — including recreational and commercial fishermen, inland aquaculture firms, and activist environmental groups — strongly oppose open-ocean aquaculture, big, mainstream, more business-friendly environmental nonprofits are now working with governments and businesses around the world to support responsible aquaculture. They see fish farming as a way to deliver healthy protein to a growing global population, while also protecting wild fish stocks from overfishing.

“Aquaculture, when done properly, can be a major tool for conservation,” says Jerry Schubel, an oceanographer who is president of the Aquarium of the Pacific in Long Beach, California. “It can conserve wild stocks. It can conserve habitat. We have a great opportunity to show the rest of the world how it can be done in a sustainable way.”

WWF has long supported sustainable aquaculture, helping to start a standard-setting organization called the Aquaculture Stewardship Council. Seven years ago, Conservation International released a comprehensive global study of fish farming called Blue Frontiers that spotlighted problems with poorly run fish farms, but concluded that aquaculture “has clear advantages over other types of animal source food production for human consumption.” The Nature Conservancy began an aquaculture program two years ago; it is tracking the impact of shellfish and eelgrass farms in Chesapeake Bay and Tomales Bay, California, that have the potential to restore marine ecosystems.

“The shellfish and the [eelgrass] are a gateway,” says Robert Jones, the global lead for aquaculture at The Nature Conservancy. “We’re looking to get involved with finfish… It’s a huge opportunity for the planet, and it’s also a huge challenge.”

Adds Schubel, “We have the science, the technology, the ocean waters — the largest exclusive economic zone of any country in the world — and we have the demand. No matter how you look at it, the U.S. should get involved in marine aquaculture.”

The environmental community, though, remains divided. Oceana, which works to preserve oceans, has no position on U.S. offshore aquaculture, but it has campaigned against farmed salmon from Chile, saying Chile’s salmon farmers are overusing antibiotics, among other things. Food and Water Watch, a Washington-based non-profit, says fish farms are “generally big, dirty and dangerous, just like factory farming on land.” It argues that private companies should not be allowed “to exploit our public resource for their financial benefit.”

When NOAA put forth a plan to streamline the permitting process for ocean aquaculture in the Gulf of Mexico in 2016, Food and Water Watch, the Center for Food Safety, and groups of commercial and recreational fishers in the Gulf filed suit. “Look at salmon farming anywhere on the planet,” says Marianne Cufone, the founder of the Recirculating Farms Coalition, which represents small-scale fish farms that recycle their water. “It has been disastrous, with diseases, escapes, and pollution.”

Even staunch supporters of U.S. offshore aquaculture don't defend practices elsewhere. To the contrary, they argue that it's better to grow fish responsibly, close to home, than to import seafood from places where regulations are weak or poorly enforced. China, Indonesia, India, Vietnam, and the Philippines are the world's biggest aquaculture producers, according to the U.N.'s Food and Agriculture Organization. Bangladesh alone produces four times more farmed seafood than the U.S.

Manna Fish Farms hopes to use this automated feed device at the deepwater fish farm it is planning off Long Island, New York. Courtesy of Donna Lanzetta

“If you did more aquaculture in the U.S., it could be better regulated and it would shorten the supply chain, bringing it close to major cities,” says Jones. The U.S. currently imports about 90 percent of its seafood, creating a so-called seafood trade deficit estimated at more than $14 billion.

Nor do offshore aquaculture advocates deny that coastal aquaculture has caused an array of environmental woes. Near-shore fish-farming operations can concentrate parasites and disease, as salmon infested with sea lice have done in Norway, Chile, and elsewhere, posing a risk to young wild salmon. When stocking densities are too high, fish farms pollute waters with fecal matter and uneaten food. They also use pesticides and antibiotics that may contribute to bacterial resistance that threatens human health. Finally, because fish being raised on farms are often fed fish meal and fish oil derived from small wild fish like menhaden and anchovies, there’s a risk that aquaculture could lead to overfishing of forage fish stocks and disrupt the food chain of the entire marine ecosystem.

Aquaculture proponents maintain, however, that those problems are manageable — especially in offshore aquaculture operations — and, in fact, are being well managed in the U.S. “Low use of fishmeal in feeds, minimal use of antibiotics, well-managed nutrient effluents, and minimal escapes are actually the norm in the U.S., not the exception,” says Mike Rust, science coordinator for the aquaculture office at NOAA. On the issue of fish feed, for example, fish farmers are increasingly turning to vegetarian diets, supplemented with fish oil. A Silicon Valley startup called Calysta has developed a substitute for fishmeal made from methane gas that would otherwise be released into the atmosphere.

Offshore, where pens are battered by stronger currents, escapes remain a concern. Over the years, non-native farmed fish, particularly Atlantic salmon being raised in the Pacific, have escaped into rivers and oceans in Chile and British Columbia. Just last summer, an estimated 160,000 salmon escaped into Puget Sound near Seattle, renewing concerns that they could crowd spawning grounds and compete with wild salmon for food. But state and federal regulators say that such releases do not cause lasting damage because the Atlantic salmon can’t effectively compete with native species. Despite the long-standing concerns about negative impacts from escapes, few farmed salmon have been able to survive in the wild, scientists say.

The Aquaculture Stewardship Council (ASC) and the Global Aquaculture Alliance were formed to reward farmers who practice responsible fish farming. Their efforts seem to be paying off: The ASC now certifies nearly 25 percent of global salmon production, up from just 7 percent in 2015, its standards guiding major U.S. retailers such as Walmart and Whole Foods Market. “These standards, if met, will give us not just an expansion of responsibility, but an expansion of our food supply,” says Scott Nichols, an ASC board member.

Scientists also have found that farming fish is better for the climate than raising land-based animals. The data is squishy, but the logic is solid: Fish are much better than cows, pigs, and chickens at converting feed into food because they are cold-blooded and don’t need to use energy to warm themselves, don’t need to fight gravity, and have smaller skeletons. In 2014, Oxford University scientists examined the diets of about 55,000 Britons and found that a pescatarian diet, consisting of fish and vegetables but no meat, generated half the carbon footprint of a diet heavy in meat. Vegetarians did even better, of course.

What’s more, unlike land animals, fish don’t need to be fed fresh water, notes Steve Gaines, the dean of the School of Environmental Science and Management at the University of California, Santa Barbara and an aquaculture expert. “Even if we look at just average practices in aquaculture today, not best practices, they’re substantially better than most forms of land-based agricultural production,” says Gaines. “If you look at best practices in aquaculture, there’s nothing comparable in terms of land-based meat production that has such a low level of environmental impacts.”

Meanwhile, those seeking to pioneer offshore aquaculture in the U.S. are moving ahead. Kent, who is seeking to build the fish farm off the California coast, has been at it for a decade, and says his investors — they once included an aquaculture investment fund backed by the Walton family — have spent more than $1 million. Sims’ Kampachi Farms was recently awarded a $139,000 federal grant to demonstrate aquaculture off the Florida coast. “If the results… are favorable, then we would also hope to pursue a commercial aquaculture permit,” he says.

As for Manna Fish Farms, Lanzetta's company, it has secured about $300,000 in grants from the state of New York and local government to support its development. She's hoping to secure the necessary permits this year, deploy equipment next year, get fish onto the site in the spring, and reap her first harvest by the end of 2019.

Marc Gunther has reported on business and sustainability for Fortune, The Guardian, and GreenBiz. He now writes about foundations, nonprofits, and global development on his blog, Nonprofit Chronicles.

What to do with "Spent" wind turbine blades?

03 October 2017

George Marsh

Given that there are already an estimated 345,000 utility-scale wind turbines installed around the world (based on Global Wind Energy Council statistics) and that most are designed for service lives of 20 to 25 years, there is a looming issue: what should happen to large fibre reinforced plastic (FRP) blades when their service lives have expired?

This is a preview of an article that will appear in a future Volume of Renewable Energy Focus Journal...

Some 10-15 tonnes of FRP are associated with each megawatt of rated wind turbine power. This illustrates the scale of the emerging challenge when it comes to disposing of turbine blades at end-of-life (eol).

Clearly, this is no small matter. Most turbine rotors have three blades, ranging in size from 12-15 metres long, typical of the early machine generations, to 80 metres today - an example of the latter being the MHI-Vestas blades now spinning on thirty-two 8 MW offshore turbines forming the Burbo Bank extension project off Liverpool Bay in the UK. Although it should be many years before these become eol items, the significant numbers of early-generation blades now reaching or approaching this point will soon require careful disposal, or else some form of second life.
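
Combining the two figures above gives a rough sense of scale; the sketch below applies the 10-15 tonnes-per-megawatt rule of thumb to the thirty-two 8 MW Burbo Bank turbines. It is an order-of-magnitude illustration, not a figure from the article.

# Rough order-of-magnitude estimate: FRP tonnage implied by 10-15 tonnes per MW
# applied to the 32 x 8 MW turbines of the Burbo Bank extension.
turbines, rating_mw = 32, 8
low_t_per_mw, high_t_per_mw = 10, 15
total_mw = turbines * rating_mw                     # 256 MW
print(f"roughly {low_t_per_mw * total_mw:,}-{high_t_per_mw * total_mw:,} tonnes of FRP "
      f"across the {total_mw} MW extension")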

This is not to dismiss the eol issue presented by a wind turbine as a whole, but techniques for recycling concrete and steel elements are well established and the main challenge lies with the composite plastics that most blades are made from. These bi-phase FRP materials have been selected for their weight, structural and aerodynamic advantages when formed into blades - not for recyclability. Unfortunately, it is difficult to separate the fibres from the polymer resins in which they are embedded, but only by achieving such separation can the highest value recycling options be secured. Techniques for accomplishing this are in their infancy, so the majority practice to date has been to adopt options that do not require separation.

One possibility is to recycle blades that have become surplus as a result of re-powering operations into new installations. A secondary market could develop for original blades which, refurbished as necessary, could be welcomed in territories that have embarked on the wind power journey more recently. There, less powerful turbines on smaller-scale wind farms, often off-grid, are likely to remain common for some time, and used blades with life left in them may prove useful for these. Both Denmark and Germany, wind power pioneers, have experience in this area. Intermediaries already exist that hold stocks of old wind turbines for export, and more business opportunities are likely to arise in both blade reclamation and logistics.

However, recycling complete blades is not the only option; parts of blades can be utilised too. Blade sections have been used for bus shelters and public seating in the Netherlands, and for children’s playground items in a number of countries. Marine structures and art installations are among other potential applications.

Denmark was in the vanguard of wind energy adoption and is naturally becoming one of the first countries to face the bulk disposal challenge. There, a working party has been formed to combine the expertise of several significant players and bring it to bear on blade recycling development. Coordinated by the Danish Wind Energy Association, the body has some 20 participants including such luminaries as MHI-Vestas, LM Wind Power and Siemens. The presence of these blade manufacturers is important given the growing influence of the ’producer responsibility’ principle as a driving force in European waste strategy.

In terms of disposal, while volumes are low and blade sizes remain modest, landfill is the dominant option. Blades may be cut up into manageable fragments, but there is rarely more pre-treatment than this. However, plastics in general are problematic due to their low or negligible degradability, and the issue is most pronounced with reinforced plastics. With wind turbine blades likely to account for some 50,000 tonnes of waste annually by 2020, rising four-fold by 2034 (according to research quoted by the European Wind Energy Association), landfill is hardly a viable long-term solution. Already, access to landfill facilities willing to accept such waste is becoming limited.

In Germany, authorities have been banning the use of landfill for blade (and other items) disposal for some time and elsewhere punitive charges are levied; landfill tax in the UK, for instance, can already reach over £80/tonne. Moreover, as public awareness of the volumes of landfill involved grows, reputational damage could become worrying for an industry that prides itself on its green credentials.

Incineration

Landfill can be avoided by disposing of FRPs by incineration, an option that is reasonably accessible and affordable. However, the ash residue from the process, though much reduced in bulk, itself then needs to be disposed of in landfill or used in other products such as cement. Some demand can be anticipated for rotor blade ash because it contains high amounts of silica and calcium, two main components of high-grade clinker.

Combustion processes need to be efficiently controlled to avoid troublesome emissions. Moreover, most incinerators cannot accommodate such large items, so rotor blades would generally have to be cut up into sizes that would fit.

Material re-use

An attractive alternative to outright disposal is to use recovered material in secondary products such as aggregate and cement. There is considerable potential for reducing decommissioned blades into smaller fragments or particles that can be used to bulk out and enhance the properties of such products used in construction and related industries. Alternatively, they can be utilised as a fuel. Fibreglass fragments derived from rotor blades, boats and other eol composite items are already being so used in concrete furnaces, the ensuing ash being assimilable into concrete as a bulking agent in which short fibres may also have some reinforcement benefit.

A Danish large-scale manufacturer of pultruded fibreglass profiles, Fiberline Composites A/S, points out that there is a natural link between cement and fibreglass. Both, it says, contain sand (silica), so glass fibre fragments make a compatible addition to natural sand used in cement while resin content can be useful as fuel in the energy-intensive cement making process.

Pursuing a zero landfill, zero energy goal, Fiberline is part of a collaboration in which it sends its waste to German companies Zajons Logistik and Neocomp GmbH, where large fibreglass fragments are reduced to granules in giant crushers. Calorific value is adjusted by adding other types of recyclate. The resulting granular material is sent to cement manufacturer Holcim AG, which includes it in the feed to its cement making kilns. Holcim claims that recycling 1,000 tonnes of fibreglass waste into cement in this way saves up to 450 tonnes of coal, 200 tonnes of chalk, 200 tonnes of sand and 150 tonnes of aluminium oxide in a process that produces minimal dust, ash or other residues.

In another avenue, source material can be further ground into a powder that can then be used in other ways, for example in the production of thermoformed moulds.

The rest of this Industry Perspective article will appear in Volume 22-23 of Renewable Energy Focus Journal.

Utilities Bury More Transmission Lines to Prevent Storm Damage

Facing hurricanes and public opposition to overhead lines, utilities are paying extra to go underground

By Peter Fairley

In the past six months, transmission lines have been destroyed by hurricanes in Puerto Rico, singed by wildfires in California, and bitterly opposed by residents in Utah and Pennsylvania who want to stop utilities from building more.

Such problems have grid operators literally thinking deeper. Increasingly, utilities in the United States and elsewhere are routing power underground. Puerto Rico’s grid rebuild is a prime example: A proposal, crafted by an industry-government consortium late last year, calls for “undergrounding” transmission to harden a power system still recovering from Hurricanes Irma and Maria.

Much of the plan’s outlay for transmission—US $4.3 billion—would create hardy overhead circuits interspersed with underground cables in areas where gusts could snap even the strongest lines and towers. A $601 million line item also provides for a buried high-voltage direct current (HVDC) cable around the territory’s southeast corner, where most big storms strike first. This underground bypass would create a secure path from the island’s most efficient power plants to the heavily populated area around San Juan.

By heading below ground, transmission grids are following a path laid by their lower-voltage cousins—distribution grids. In some cities, power distribution occurs entirely out of sight. This is possible thanks to specialized cables, whose metal conductors are wrapped in cross-linked polyethylene, a heat-stable insulator, as well as metal and polymer layers providing electrical shielding, impermeability to water, and puncture resistance.

Utilities have been slower to bury transmission because of the expense, according to power consultant Ken Hall, a former transmission and distribution director at the Edison Electric Institute, a Washington, D.C.–based utility trade group. Transmission lines operate at higher power levels than distribution lines and generate more heat, which is harder to dissipate underground.

Doing it properly can mean burying up to a dozen cables at a time to carry the current, and specifying thick copper conductors that have lower resistance and generate less heat than the cheaper aluminum employed in overhead lines. Each cable must be shipped in roughly 1-kilometer-long links and stitched together on-site, adding further to the tab.
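
The heat problem comes down to resistive losses. As a rough physics illustration, the sketch below compares the I^2*R heating per kilometre of copper and aluminium conductors of the same cross-section; the 1,000-ampere current and 2,000 square-millimetre cross-section are assumed values chosen only to show the relative difference, not figures from any particular cable design.

# Resistive heating per kilometre, I^2 * R, for copper versus aluminium conductors of
# equal cross-section. Current and cross-section are illustrative assumptions.
RHO_COPPER = 1.68e-8      # ohm-metres at about 20 C
RHO_ALUMINIUM = 2.65e-8   # ohm-metres at about 20 C

def heat_per_km_kw(resistivity_ohm_m, current_a=1000.0, area_mm2=2000.0):
    resistance_per_km = resistivity_ohm_m * 1000.0 / (area_mm2 * 1e-6)   # ohms per km
    return current_a ** 2 * resistance_per_km / 1000.0                    # kW per km

print(f"copper:    about {heat_per_km_kw(RHO_COPPER):.0f} kW of heat per km")
print(f"aluminium: about {heat_per_km_kw(RHO_ALUMINIUM):.0f} kW of heat per km")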

Tally it all up, says Hall, and underground transmission costs roughly 5 to 10 times as much per kilometer as overhead circuits. “Every utility in the United States has underground distribution. But not every utility has underground transmission,” he notes.

Despite the expense, utilities are now investing more in underground transmission, prodded by regulators and public outcry. Denmark was among the first to mandate it in 2008, requiring most new AC and HVDC transmission to be routed underground, with the exception of the highest-voltage, 400-kilovolt AC lines. (Burying high-voltage AC is harder than burying HVDC, largely because AC flows with greater resistance and thus generates more heat.)

In 2015, Germany mandated underground transmission for HVDC systems unless they could be strung alongside existing power lines. Most of the HVDC projects that Germany is counting on to supply North Sea wind power to southern cities, originally designed as overhead lines, are now being replanned as underground links.

Germany's reforms also encourage burying short segments of high-voltage AC to reduce public conflicts over transmission routes through towns and scenic areas. Operators in Germany are planning or have completed 11 AC pilot projects, according to Heinrich Laun, a regional manager for Bürgerdialog Stromnetz, a publicly funded initiative that helps communities understand and negotiate grid planning.

In the United States, utilities have promised to bury lines along nearly one-third of the long-disputed 309-kilometer Northern Pass project—a set of AC and HVDC links meant to deliver more Canadian hydropower to the northeastern United States. The intent was to eliminate the "visual impacts" of tall towers and suspended lines on New Hampshire's White Mountains, which are popular with tourists. State regulators will vote on the project in February.

For all its benefits, Laun says undergrounding has also caused new headaches for transmission operators. Farmers can plant and plow over buried cables, but Germany’s farm lobbies are concerned about potential soil impacts from cable heat and altered drainage, and have demanded compensation.

Replanning underground cables has already delayed most new HVDC projects, Laun says, which were supposed to provide an alternate supply of electricity for the south before Germany’s last nuclear power plants are shut down in 2022. Now, most HVDC projects are not expected until 2025, and schedules could slip further behind if faced with more opposition.

To Laun, these short-term challenges will beget long-term acceptance of new transmission lines that are buried out of sight. He says this acceptance is critical if Germany is to build the many more lines necessary to integrate more renewable power into its grid. “In the long run, it’s worth it,” he says.

Reduced energy from the sun might occur by mid-century: Now scientists know by how much

Date: February 6, 2018

Source: UC San Diego

Summary: The Sun might emit less radiation by mid-century, giving planet Earth a chance to warm a bit more slowly but not halting the trend of human-induced climate change.

The cooldown would be the result of what scientists call a grand minimum, a periodic event during which the Sun's magnetism diminishes, sunspots form infrequently, and less ultraviolet radiation makes it to the surface of the planet. Scientists believe that the event is triggered at irregular intervals by random fluctuations related to the Sun's magnetic field.

Scientists have used reconstructions based on geological and historical data to attribute a cold period in Europe in the mid-17th Century to such an event, named the "Maunder Minimum." Temperatures were low enough to freeze the Thames River on a regular basis and freeze the Baltic Sea to such an extent that a Swedish army was able to invade Denmark in 1658 on foot by marching across the sea ice.

A team of scientists led by research physicist Dan Lubin at Scripps Institution of Oceanography at the University of California San Diego has created for the first time an estimate of how much dimmer the Sun should be when the next minimum takes place.

There is a well-known 11-year cycle in which the Sun's ultraviolet radiation peaks and declines as a result of sunspot activity. During a grand minimum, Lubin estimates that ultraviolet radiation diminishes an additional seven percent beyond the lowest point of that cycle. His team's study, "Ultraviolet Flux Decrease Under a Grand Minimum from IUE Short-wavelength Observation of Solar Analogs," appears in the publication Astrophysical Journal Letters and was funded by the state of California.

"Now we have a benchmark from which we can perform better climate model simulations," Lubin said. "We can therefore have a better idea of how changes in solar UV radiation affect climate change."

Lubin and colleagues David Tytler and Carl Melis of UC San Diego's Center for Astrophysics and Space Sciences arrived at their estimate of a grand minimum's intensity by reviewing nearly 20 years of data gathered by the International Ultraviolet Explorer satellite mission. They compared radiation from stars that are analogous to the Sun and identified those that were experiencing minima.

The reduced energy from the Sun sets into motion a sequence of events on Earth beginning with a thinning of the stratospheric ozone layer. That thinning in turn changes the temperature structure of the stratosphere, which then changes the dynamics of the lower atmosphere, especially wind and weather patterns. The cooling is not uniform. While areas of Europe chilled during the Maunder Minimum, other areas such as Alaska and southern Greenland warmed correspondingly.

Lubin and other scientists predict a significant probability of a near-future grand minimum because the downward sunspot pattern in recent solar cycles resembles the run-ups to past grand minimum events.

Despite how much the Maunder Minimum might have affected Earth the last time, Lubin said that an upcoming event would not stop the current trend of planetary warming but might slow it somewhat. The cooling effect of a grand minimum is only a fraction of the warming effect caused by the increasing concentration of carbon dioxide in the atmosphere. After hundreds of thousands of years of CO2 levels never exceeding 300 parts per million in air, the concentration of the greenhouse gas is now over 400 parts per million, continuing a rise that began with the Industrial Revolution. Other researchers have used computer models to estimate what an event similar to a Maunder Minimum, if it were to occur in coming decades, might mean for our current climate, which is now rapidly warming.

One such study looked at the climate consequences of a future Maunder Minimum-type grand solar minimum, assuming a total solar irradiance reduced by 0.25 percent over a 50-year period from 2020 to 2070. The study found that after the initial decrease of solar radiation in 2020, globally averaged surface air temperature cooled by up to several tenths of a degree Celsius. By the end of the simulated grand solar minimum, however, the warming in the model with the simulated Maunder Minimum had nearly caught up to the reference simulation. Thus, a main conclusion of the study is that "a future grand solar minimum could slow down but not stop global warming."
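
A back-of-envelope check, sketched below, shows how the simulated numbers hang together. Only the 0.25 percent reduction comes from the cited study; the irradiance, albedo, and climate-response values are textbook approximations added here for illustration.

TSI = 1361.0        # total solar irradiance, W/m^2 (approximate)
ALBEDO = 0.3        # planetary albedo (approximate)
REDUCTION = 0.0025  # the 0.25% drop assumed in the cited simulation

# Globally averaged forcing: divide by 4 for sphere-versus-disc geometry,
# multiply by (1 - albedo) for the sunlight that is reflected anyway.
delta_forcing = TSI * REDUCTION / 4.0 * (1.0 - ALBEDO)

# A transient response near 0.5 C per W/m^2 is a common rule of thumb.
delta_temp = 0.5 * delta_forcing

print(f"Forcing change: about -{delta_forcing:.2f} W/m^2")
print(f"Implied cooling: about {delta_temp:.1f} C")
# Roughly -0.6 W/m^2 and about 0.3 C: a few tenths of a degree, small
# next to the roughly 2 W/m^2 (and growing) of greenhouse forcing.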

Journal Reference:

Dan Lubin, Carl Melis, David Tytler. Ultraviolet Flux Decrease Under a Grand Minimum from IUE Short-wavelength Observation of Solar Analogs. The Astrophysical Journal, 2017; 852 (1): L4 DOI: 10.3847/2041-8213/aaa124

Chemical cluster could transform energy storage for large electrical grids

The compound's promising electroactive properties make it an ideal candidate material for redox flow batteries

UNIVERSITY AT BUFFALO


BUFFALO, N.Y. -- To power entire communities with clean energy, such as solar and wind power, a reliable backup storage system is needed to provide energy when the wind isn't blowing and the sun isn't out.

One possibility is to use any excess solar- and wind-based energy to charge solutions of chemicals that can subsequently be stored for use when sunshine and wind are scarce. During these down times, chemical solutions of opposite charge can be pumped across solid electrodes, thus creating an electron exchange that provides power to the electrical grid.

The key to this technology, called a redox flow battery, is finding chemicals that can not only "carry" sufficient charge, but also be stored without degrading for long periods, thereby maximizing power generation and minimizing the costs of replenishing the system.
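
In rough terms, the energy a flow battery's tanks can hold scales with how concentrated the charge carrier is, how many electrons each molecule stores, and the cell voltage. The sketch below illustrates that relationship with hypothetical placeholder values, not figures from the study described next.

FARADAY = 96485.0  # coulombs per mole of electrons

concentration_mol_per_l = 0.5  # hypothetical charge-carrier concentration
electrons_per_molecule = 2     # hypothetical electrons stored per cluster
cell_voltage = 1.0             # hypothetical average cell voltage, volts

# Energy per liter of electrolyte is the charge stored times the voltage.
charge_c_per_l = concentration_mol_per_l * electrons_per_molecule * FARADAY
energy_wh_per_l = charge_c_per_l * cell_voltage / 3600.0

print(f"~{energy_wh_per_l:.0f} Wh per liter of electrolyte")
# Widening the stable voltage window or packing more electrons into each
# molecule raises this number directly, which is why the molecular
# modification described below doubled the energy stored.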

Researchers at the University of Rochester and University at Buffalo believe they have found a promising compound that could transform the energy storage landscape.

In a paper published in Chemical Science, an open access journal of the Royal Society of Chemistry, the researchers describe modifying a metal-oxide cluster, which has promising electroactive properties, so that it is nearly twice as effective as the unmodified cluster for electrochemical energy storage in a redox flow battery.

The research was led by the lab of Ellen Matson, PhD, University of Rochester assistant professor of chemistry. Matson's team partnered with Timothy Cook, PhD, assistant professor of chemistry in the UB College of Arts and Sciences, to develop and study the cluster.

"Energy storage applications with polyoxometalates are pretty rare in the literature. There are maybe one or two examples prior to ours, and they didn't really maximize the potential of these systems," says first author Lauren VanGelder, a third-year PhD student in Matson's lab and a UB graduate who received her BS in chemistry and biomedical sciences.

"This is really an untapped area of molecular development," Matson adds.

The cluster was first developed in the lab of German chemist Johann Spandl, and studied for its magnetic properties. Tests conducted by VanGelder showed that the compound could store charge in a redox flow battery, "but was not as stable as we had hoped."

However, by making what Matson describes as "a simple molecular modification" -- replacing the compound's methanol-derived methoxide groups with ethanol-based ethoxide ligands -- the team was able to expand the potential window during which the cluster was stable, doubling the amount of electrical energy that could be stored in the battery.

Cook's team -- including fourth-year PhD candidate Anjula Kosswattaarachchi -- contributed to the research by carrying out tests that enabled the scientists to determine how stable different cluster compounds were.

"We carried out a series of experiments to evaluate the electrochemical properties of the clusters," Cook says. "Specifically, we were interested in seeing if the clusters were stable over the course of minutes, hours, and days. We also constructed a prototype battery where we charged and discharged the clusters, keeping track of how many electrons we could transfer and seeing if all of the energy we stored could be recovered, as one would expect of a good battery.

"These experiments let us calculate the efficiency of the device in a very exact way, letting us compare one system to another. Because of these studies, we were able to make molecular changes to the cluster and then determine exactly what properties were effected."

Says Matson: "What's really cool about this work is the way we can generate the ethoxide and methoxide clusters by using methanol and ethanol. Both of these reagents are inexpensive, readily available and safe to use. The metal and oxygen atoms that compose the remainder of the cluster are earth-abundant elements. The straightforward, efficient synthesis of this system is a totally new direction in charge-carrier development that, we believe, will set a new standard in the field."

Matson and Cook's research groups have applied for a National Science Foundation grant as part of an ongoing collaboration to further refine the clusters for use in commercial redox flow batteries.

A University of Rochester Furth Fund Award that Matson received last year enabled the lab to purchase electrochemical equipment needed for the study. Patrick Forrestal of the Matson lab also contributed to the study.

The Toxic Truth Behind Mardi Gras Beads

Every year, 25 million pounds of plastic beads made by Chinese factory workers get dumped on the streets of New Orleans

Flame retardants and lead in Mardi Gras beads may pose a danger to people and the environment.

By David Redmon, The Conversation

SMITHSONIAN.COM, MARCH 8, 2017

Shiny, colorful bead necklaces, also known as “throws,” are now synonymous with Mardi Gras.

Even if you’ve never been to the Carnival celebrations, you probably know the typical scene that plays out on New Orleans’ Bourbon Street every year: Revelers line up along the parade route to collect beads tossed from floats. Many try to collect as many as possible, and some drunken revelers will even expose themselves in exchange for the plastic trinkets.

But the celebratory atmosphere couldn’t be more different from the grim factories in the Fujian province of China, where teenage girls work around the clock making and stringing together the green, purple and gold beads.

I’ve spent several years researching the circulation of these plastic beads, and their life doesn’t begin and end that one week in New Orleans. Beneath the sheen of the beads is a story that’s far more complex – one that takes place in the Middle East, China and the United States, and is symptomatic of a consumer culture built on waste, exploitation and toxic chemicals.

The Mardi Gras bead originates in Middle Eastern oil fields. There, under the protection of military forces, companies extract the petroleum before transforming it into polystyrene and polyethylene – the main ingredients in many plastics.

The plastic is then shipped to China to be fashioned into necklaces – to factories where American companies are able to take advantage of inexpensive labor, lax workplace regulations and a lack of environmental oversight.

I traveled to several Mardi Gras bead factories in China to witness the working conditions firsthand. There, I met numerous teenagers, many of whom agreed to participate in the making of my documentary, “Mardi Gras: Made in China.”

Among them was 15-year-old Qui Bia. When I interviewed her, she sat next to a three-foot-high pile of beads, staring at a coworker who sat across from her.

I asked her what she was thinking about.

“Nothing – just how I can work faster than her to make more money,” she replied, pointing to the young woman across from her. “What is there to think about? I just do the same thing over and over again.”

I then asked her how many necklaces she was expected to make each day.

“The quota is 200, but I can only make close to 100. If I make a mistake, then the boss will fine me. It’s important to concentrate because I don’t want to get fined.”

At that point the manager assured me, “They work hard. Our rules are in place so they can make more money. Otherwise, they won’t work as fast.”

It seemed as if the bead workers were treated as mules, with the forces of the market their masters.

A family catches Mardi Gras beads during the Krewe of Thoth parade down St. Charles Avenue in 2000. (Reuters)

In America, the necklaces appear innocent enough, and Mardi Gras revelers seem to love them; in fact, 25 million pounds get distributed each year. Yet they pose a danger to people and the environment.

In the 1970s, an environmental scientist named Dr. Howard Mielke was directly involved in the legal efforts to phase out lead in gasoline. Today, at Tulane University’s Department of Pharmacology, he researches the links between lead, the environment and skin absorption in New Orleans.

Howard mapped the levels of lead in various parts of the city, and discovered that the majority of lead in the soil is located directly alongside the Mardi Gras parade routes, where krewes (the revelers who ride on the floats) toss plastic beads into the crowds.

Howard’s concern is the collective impact of the beads thrown each carnival season, which translates to almost 4,000 pounds of lead hitting the streets.

“If children pick up the beads, they will become exposed to a fine dusting of lead,” Howard told me. “Beads obviously attract people, and they’re designed to be touched, coveted.”

And then there are the beads that don’t get taken home. By the time Mardi Gras is over, thousands of shiny necklaces litter the streets, and partiers have collectively produced roughly 150 tons of waste – a concoction of puke, toxins and trash.

Independent research on beads collected from New Orleans parades has found toxic levels of lead, bromine, arsenic, phthalate plasticizers, halogens, cadmium, chromium, mercury and chlorine on and inside the beads. It’s estimated that up to 920,000 pounds of mixed chlorinated and brominated flame retardants were in the beads.
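
Putting the article's figures together, the 4,000 pounds of lead spread across 25 million pounds of beads implies an average lead content on the order of 160 parts per million, as the short calculation below shows; only those two quoted figures go into it.

beads_lb = 25_000_000  # pounds of beads distributed each year (from the article)
lead_lb = 4_000        # pounds of lead hitting the streets (from the article)

lead_ppm = lead_lb / beads_lb * 1_000_000
print(f"Implied average lead content: {lead_ppm:.0f} ppm by weight")
# About 160 ppm; for scale, U.S. law caps total lead content in
# children's products at 100 ppm.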

How did we get to the point where 25 million pounds of toxic beads get dumped on a city’s streets every year? Sure, Mardi Gras is a celebration ingrained in New Orleans’ culture. But plastic beads weren’t always a part of Mardi Gras; they were introduced only in the late 1970s.

From a sociological perspective, leisure, consumption and desire all interact to create a complex ecology of social behavior. During the 1960s and 1970s in the United States, self-expression became the rage, with more and more people using their bodies to experience or communicate pleasure. Revelers in New Orleans started flashing each other in return for Mardi Gras beads at the same time the free love movement became popular in the United States.

The culture of consumption and ethos of self-expression merged perfectly with the production of cheap plastic in China, which was used to manufacture disposable commodities. Americans could now instantly (and cheaply) express themselves, discard the objects and later replace them with new ones.

When looking at the entire story – from the Middle East, to China, to New Orleans – a new picture comes into focus: a cycle of environmental degradation, worker exploitation and irreparable health consequences. No one is spared; the child on the streets of New Orleans innocently sucking on his new necklace and young factory workers like Qui Bia are both exposed to the same neurotoxic chemicals.

How can this cycle be broken? Is there any way out?

In recent years, a company called Zombeads has created throws with organic, biodegradable ingredients – some of which are designed and manufactured locally in Louisiana. That’s one step in the right direction.

What about going a step further and rewarding the factories that make these beads with tax breaks and federal and state subsidies, which would give them incentives to sustain operations, hire more people, pay them fair living wages, all while limiting environmental degradation? A scenario like this could reduce the rates of cancers caused by styrene, significantly reduce carbon dioxide emissions, and help create local manufacturing jobs in Louisiana.

Unfortunately, as Dr. Mielke explained to me, many are either unaware – or refuse to admit – that there’s a problem that needs to be dealt with.

“It’s part of the waste culture we have where materials pass briefly through our lives and then are dumped some place,” he said. In other words: out of sight, out of mind.

So why do so many of us eagerly participate in waste culture without care or concern? Dr. Mielke sees a parallel in the fantasy told to the Chinese factory worker and the fantasy of the American consumer.

“The people in China are told these beads are valuable and given to important Americans, that beads are given to royalty. And of course [this narrative] all evaporates when you realize, ‘Oh yes, there’s royalty in Mardi Gras parades, there’s kings and queens, but it’s made up and it’s fictitious.’ Yet we carry on with these crazy events that we know are harmful.”

In other words, most people, it seems, would rather retreat into the power of myth and fantasy than confront the consequences of hard truth.

UConn Senate passes environmental general education requirement

By Nicholas Hampton


The University of Connecticut Senate agreed to include an Environmental Literacy course as part of the general education requirements Monday evening.

The requirement was first proposed in December 2016 by Jack Clausen, a UConn natural resources and the environment professor, according to David Wagner, an ecology and evolutionary biology professor at UConn.

Wagner said the proposal was assigned to the General Education Oversight Committee to develop a plan for including the requirement before the Senate voted on it. After more than a year of restructuring, the Senate moved to decide on the proposal.

“We’re going to be a leader in educating our students about environmental literacy and giving them the will and the knowledge to go forward and effect change,” Wagner said.

A student’s plan of study is determined by the year they are admitted to the school, so the environmental literacy general education requirement will only affect incoming students, according to the office of the registrar website.

Students of all majors supported the proposal last year with 1,200 student signatures, according to Benjamin Breslau, an eighth-semester ecology and evolutionary biology major. In the last three months there were 700 more signatures collected, he said.

“It’s the responsibility of our university to go ahead with this [requirement] and embrace environmental literacy,” Breslau said. “It’s the best way to educate students for the future.”

Wanjiku Gatheru, a fourth-semester environmental studies major, said that taking an environmental science course at UConn helped her realize the positive change humanity could bring about for the environment.

“This is an opportunity for the university to continue to cultivate a new generation of thinkers,” Gatheru said. “Let us continue to stand at the forefront of this movement and aid our students in making our future livable and equitable.”

Senator Hedley Freake, a UConn nutritional sciences professor, said a task force will be started right away to integrate the requirement into the general education system. The Senate vote on the final implementation will happen at the beginning of the fall, with the requirement taking effect that fall semester, Freake said.

“This approach will allow us to respond to the immediate needs to enhance environmental literacy while also ensuring that our actions are aligned with both the current and future general education systems,” Freake said.

Responsible Battery Coalition Launches 2 Million Battery Challenge

February 13, 2018 02:00 PM Eastern Standard Time

WASHINGTON--(BUSINESS WIRE)--A coalition of leading vehicle battery manufacturers, recyclers, retailers and users dedicated to the responsible manufacturing, use and reuse of vehicle batteries launched an initiative today to recover 2 million more batteries with the goal of achieving a recycling rate of 100%. The campaign, called the 2 Million Battery Challenge, is an effort to engage consumers to bring their used vehicle batteries to the nearest participating auto parts retailer to have them properly recycled.


“The latest automotive industry research shows that 12% of consumers still have a dead or unusable vehicle battery at home in a garage or old vehicle and not in the closed recycling loop,” said Pat Hayes, executive director of the Responsible Battery Coalition, the organization leading the effort. “That’s enough batteries to equal the weight of 1,000 semi-trucks or enough to line the length of 8,000 football fields.”
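
Those comparisons roughly check out under simple assumptions, as the back-of-envelope calculation below shows. The 2 million count comes from the campaign; the per-battery weight and length are hypothetical figures for a typical car battery.

batteries = 2_000_000  # the campaign's target (from the article)
weight_lb_each = 40    # hypothetical average car-battery weight
length_in_each = 12    # hypothetical average car-battery length

semi_trucks = batteries * weight_lb_each / 80_000        # 80,000 lb max gross weight
football_fields = batteries * length_in_each / 12 / 300  # 100-yard playing field

print(f"~{semi_trucks:,.0f} loaded semi-trucks by weight")
print(f"~{football_fields:,.0f} football fields laid end to end")
# Both land in the same ballpark as the coalition's comparisons.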

“The recycling of vehicle batteries is one of the great achievements in protecting public and environmental health,” said Ramon Sanchez, Ph.D., of the Harvard University School of Public Health and chair of the Responsible Battery Coalition’s Science Advisory Board. “With 99% of the vehicle batteries in North America currently being recycled, we are reducing pollution, including the greenhouse gas emissions caused by sourcing new battery materials. Getting the remaining 2 million batteries recycled will make this positive impact even better.”

The event, sponsored by the Senate Auto Caucus, marks the launch of the campaign and includes a panel discussion featuring members of the Responsible Battery Coalition, its partners and expert advisors.

Sen. Rob Portman (R-OH), co-chair of the caucus, commended the Responsible Battery Coalition’s members for their environmental stewardship. “What has been achieved by this industry is remarkable and stands as an example to others around the world. I applaud them for wanting to do better,” he said.

The 2 Million Battery Challenge will utilize a combination of online advertising and social media engagement to inform consumers that their used batteries can and should be properly disposed of at a location near them. These locations are often automotive aftermarket retailers or municipal recycling centers. “We want to make this as easy as possible for people,” said Hayes. “Our campaign directs consumers to a page on our website that will allow them to locate the collection center nearest them. All they need to do is bring the battery in and our partners will do the rest.”

Panelists at the Senate briefing included:

Pat Hayes, Executive Director, Responsible Battery Coalition

Ramon Sanchez, Ph.D., Director, Sustainable Technologies and Health Program, Harvard T.H. Chan School of Public Health

Adam Muellerweiss, Executive Director – Sustainability, Industry, and Government Affairs, Johnson Controls

Jonathan Moser, Head, Environment and Public Affairs – Canada, Lafarge Canada

Ray Pohlman, Vice President, Government and Community Relations, AutoZone

Micah Thompson, Senior Manager, Environmental Affairs, Advance Auto Parts

Responsible Battery Coalition

Launched in April 2017, the RBC is a coalition of companies, academics and NGOs committed to the responsible management of the batteries of today and tomorrow, advancing the responsible production, transport, sale, use, reuse, recycling and resource recovery of transportation, industrial and stationary batteries and other energy storage devices. RBC’s members include Johnson Controls, Walmart, Ford Motor Co., Honda North America, Federal Express, Advance Auto Parts, AutoZone, Canadian Energy, O’Reilly Auto Parts, Club Car and LafargeHolcim.

Could plant-based plastics help tackle waste pollution?

By Suzanne Bearne, Technology of Business reporter

9 February 2018

Potato starch and plant fibres can be turned into compostable plastics

We know that plastic waste is a big problem for the planet - our oceans are becoming clogged with the stuff and we're rapidly running out of landfill sites. Only 9% is recycled. Burning it contributes to greenhouse gas emissions and global warming. So could plant-based alternatives and better recycling provide an answer?

We have grown to rely on plastic - it's hardwearing and versatile and much of our modern economy depends on it. And for many current uses there are simply no commercially viable biodegradable alternatives.

The humble single-use drinking straw is a case in point. Primaplast, a leading plastic straw manufacturer, says "greener" alternatives cost a hundred times more to make.


Takeaway coffee cups are another example. In the UK alone we throw away around 2.5 billion of them every year, many of us thinking that they are recyclable when they're not, due to a layer of polyethylene that makes the cup waterproof.

One company trying to change this is Biome Bioplastics, which has developed a fully compostable and recyclable cup using natural materials such as potato starch, corn starch, and cellulose, the main constituent of plant cell walls. Most traditional plastics are made from oil.

Biome Bioplastics has developed a fully compostable takeaway coffee cup

"Many consumers buy their cups in good faith, thinking they can be recycled," says Paul Mines, the firm's chief executive.

"But most single-use containers are made from cardboard bonded with plastic, which makes them unsuitable for recycling. And the lids are most often made of polystyrene, which is currently not widely recycled in the UK and ends up in a landfill."

The company has created a plant-based plastic - called a bioplastic - that is fully biodegradable and can be disposed of in either a paper recycling bin or a food waste bin.

Mr Mines believes it's the first time a bioplastic has been made for disposable cups and lids that can cope with hot liquids but which is fully compostable and recyclable. The cup isn't yet on the market, but Mr Mines says he is in talks with a number of retailers.

"It's not feasible to get rid of plastics completely," says Mr Mines, "but instead replace some of the petroleum-based plastic for biopolymers derived from plant-based sources."

Plenty of other companies and research institutes, such as Full Cycle Bioplastics, Elk Packaging and VTT Technical Research Centre of Finland, are working on similar biopolymer solutions that are more environmentally friendly but equally functional as conventional plastic.

And Toby McCartney's firm MacRebur has developed a road surface material made from an asphalt mix and pellets of recycled plastic. The plastic mix replaces much of the oil-based bitumen traditionally used in road building.

"What we are doing is solving two world problems with one simple solution - the problems we see with the waste plastic epidemic, and the poor quality of the potholed roads we drive on today," claims Mr McCartney.

Since its launch two years ago, the Lockerbie-based firm's hybrid material has been used to build roads across the UK, from Penrith to Gloucestershire.

More than five trillion pieces of plastic are floating in our oceans, by some estimates, much of which can take up to 1,000 years to degrade fully. As it breaks up over time, tiny pieces end up being eaten by marine creatures.

Scientists are particularly worried about the threat to larger filter feeders, such as sharks, whales and rays. Toxins in the plastics pose a serious health risk to them, they warn. Plastic waste has even reached the Arctic.

So governments and businesses are starting to act.

The UK has committed to eliminating all avoidable plastic waste by 2042, while France has introduced a ban on single-use plastic bags. Norway has been operating a plastic bottle deposit scheme for decades - shoppers receive one krone (9p) back when they deposit a bottle in a collection machine. The UK is considering following suit.


And supermarkets are trying to reduce the amount of packaging they use, with Tesco aiming for all of it to be recyclable or compostable by 2025.


But one of the problems with plastics is that many are not easily recyclable.

Over in San Jose, California, Jeanny Yao and Miranda Wang, both 23, are focusing on tackling plastic bags and product packaging that is too difficult to recycle.

"Such plastics are heavily contaminated and cannot be recycled by state-of-the-art mechanical recycling," explains Ms Wang.

Their start-up BioCellection breaks down these unrecyclable plastics into chemicals that can be used as raw materials for a variety of products, from ski jackets to car parts.

"We have identified a catalyst that cuts open polymer chains to trigger a smart chain reaction," she explains.

"Once the polymer is broken into pieces with fewer than 10 carbon atoms, oxygen from the air adds to the chain and forms valuable organic acid species that can be harvested, purified, and used to make products we love."

Helen Bird of the charity Waste and Resources Action Programme (WRAP) thinks businesses need to be encouraged to use less coloured plastic, as it's far harder to recycle.

"As a rule of thumb, the clearer the plastic, the greater the opportunity of being recycled into another product or packaging," she says.

Governments need to encourage business to use recyclable packaging and to label it clearly for customers, she argues.

In our throwaway society, there haven't been many financial incentives to develop alternative compostable materials. And the idea that we are suddenly going to wean ourselves off oil-based plastics is fanciful, most experts agree.

"In the next few decades, the global middle class is expected to double," says Ms Wang. "Although we can develop more compostable plastics, there is no way we can stop consuming plastics, which have properties that no other material on earth has."

While awareness of the plastic waste issue is clearly growing, no-one is claiming there's an easy answer to the problem.

Technology of Business will examine how business is reacting to the growing governmental and societal pressure for more sustainable plastics in future articles.

Evolving Assessments of Human and Natural Contributions to Climate Change

FEBRUARY 14, 2018

As Congress continues to deliberate whether and how to address climate change, a key question has been the degree to which humans have influenced observed global climate change. Members of Congress sometimes stress that policies or actions “must be based on sound science.” Officials in the Trump Administration have expressed uncertainty about the human influence, and some have called for public debate on the topic.

To help inform policymaking, researchers and major scientific assessment processes have analyzed the attribution of observed climate change to various possible causes. Scientific assessments of both climate change and the extent to which humans have influenced it have varied in expressed confidence over time but have achieved greater scientific consensus. The latest major U.S. assessment, the Climate Science Special Report (CSSR), was released in October 2017 by the U.S. Global Change Research Program (USGCRP). It stated:

“It is extremely likely [>95% likelihood] that human influence has been the dominant cause of the observed warming since the mid-20th Century. For the warming over the last century, there is no convincing alternative explanation supported by the extent of the observational evidence.”

This CRS report provides context for the CSSR’s statement by tracing the evolution of scientific understanding and confidence regarding the drivers of recent global climate change.

Report:

https://www.everycrsreport.com/files/20180201_R45086_1dd70e8c3d428078830b3905a6925bfe63dc053d.pdf

Pennsylvania Coal Plants Dump Toxic Pollution Under Expired Permits

Residents and advocacy groups fight to bring polluters into compliance

BY JONATHAN HAHN | FEB 15 2018

For as long as she can remember, Laura Jacko has considered Pennsylvania’s lakes and rivers her home. She grew up in Erie, near the Lake Erie peninsula. About 10 years ago, she moved to Pittsburgh, a city in which three rivers—the Allegheny, the Ohio, and the Monongahela—converge like a wishbone. The topography of the sprawling metropolis known as “Steel City” is defined as much by its bridges—some 446 in all—as its industry. Jacko, 32, spent summers kayaking under some of them and taking walks along the Allegheny’s sloshing banks. For two seasons she was a member of the Steel City Rowing Club.

“The great thing about living near water is that there are so many things to do, and so many ways to connect with nature,” she says. “It was so quiet and peaceful and wonderful to get out on the river, in the middle of a city, with a bunch of other people and have the water right there next to you.”

After Jacko got married, she and her husband bought a house in Verona—a borough of Allegheny County about 12 miles from downtown Pittsburgh. “We absolutely love it here,” she says. “It’s everything that you would want out of a place to raise a family. All the neighbors know each other. The kids play in the street. There are little lemonade stands in the summertime.” The Allegheny River, which is also the family’s primary source of drinking water, is walking distance from the house. “We’re all in that river all the time. We’re all doing things in the river and taking our water from there,” she says. Even when she was pregnant, she and her husband would go to the nearest Kayak Pittsburgh station and rent a kayak to go out on the water together. Five months ago, they welcomed their first baby.

Last year, Jacko also discovered that Verona is just five miles downstream from one of the largest single sources of air and water pollution in Allegheny County: the Cheswick Generating Station, a coal-fired power plant. She was even more concerned when she learned that since 2012, the plant has been emitting toxic heavy metals and chemicals under an expired water pollution permit.

“I was shocked to find out that not as many people knew about the power plant as I would’ve thought,” she says. “It seems like it would be a pretty big deal, but it’s not general public knowledge here. You hear about Flint, which is such a ghastly, horrible thing that the entire nation is looking at it, only to discover that there are so many other situations just like that right here in our own backyards.”

The Cheswick Generating Station has been in operation since 1970. Currently owned by NRG Energy, the plant provides electricity for nearly half a million homes by burning coal. When power plants like Cheswick burn coal to produce electricity, they also generate air and water pollution with the potential for devastating impacts on public health and the environment.

There are 20 coal-fired power plants in Pennsylvania, all of which emit heavy metals like mercury, arsenic, lead, and selenium. These heavy metals can accumulate in fish or contaminate drinking water, with the potential to cause adverse effects in people who consume the water or fish, including cancer, cardiovascular disease, kidney and liver damage, and lower IQs in children. By volume, Cheswick’s largest pollutant is sulfuric acid, a key component in acid rain.

Coal ash, another dangerous byproduct of burning coal, is often stored in ash pits known as ponds. In some cases, such as FirstEnergy Corp.’s Bruce Mansfield Power Plant, sited along the Ohio River, those coal-ash ponds and landfills are unlined, increasing the potential for heavy metals such as arsenic to leach into groundwater, which then contaminates streams, rivers, and other water bodies. The Bruce Mansfield coal-slurry pond known as Little Blue Run is the largest in the world. FirstEnergy deactivated it after the Pennsylvania Department of Environmental Protection (DEP) mandated the site’s closure, and the company is now shipping its coal ash to another unlined pit in West Virginia.

Coal-fired power plants in Pennsylvania must secure National Pollutant Discharge Elimination System permits from the Pennsylvania DEP. Those permits set a clear limit for how much pollution these facilities are allowed to release into the environment. Under the Clean Water Act, the permits must be updated and reissued every five years—a review process that ensures the facilities are utilizing the most up-to-date technologies for minimizing the harmful pollution they produce.

In Pennsylvania, that review process has fallen apart.

For years, Cheswick has been operating under an expired water pollution permit (its last one, issued in 2007, expired in 2012). And Cheswick is not the only one. Last June, the Sierra Club, PennFuture, and Lower Susquehanna Riverkeeper Association sued the DEP for allowing 10 coal-fired plants around the commonwealth to discharge toxic pollutants into waters and streams under outdated water pollution permits. Last month, those groups settled the case after the DEP agreed to enforce its own rules and compel those facilities to secure updated permits.

“By allowing these plants to continue to rely on outdated technologies and unreasonably lax mitigation standards, DEP is permitting ongoing degradation of vital Pennsylvania water bodies, aquatic life, and the environment in contravention of the federal Clean Water Act, the Pennsylvania Clean Streams statute, and the Pennsylvania constitution,” says Ted Evgeniadis, the Susquehanna Riverkeeper for the Lower Susquehanna Riverkeeper Association. “Unless the DEP will agree to an enforceable timeline, which they now have, our water bodies will continue to be affected.”

According to the EPA’s 2016 Toxic Release Inventory data, Cheswick is the second-biggest polluter in Allegheny County, as measured by pounds of total air emissions. It also ranks among the highest in terms of the lead it releases. Lead is a dangerous neurotoxin that has profound negative impacts on the human brain and on human development. More than 30,000 people live within three miles of the plant.

“Everything that comes out of this plant flows directly into the Allegheny River, which is one of the primary sources of drinking water not only for the community of Cheswick, but also for many across the greater Pittsburgh region,” says Zachary Barber, the western Pennsylvania field organizer for PennEnvironment, a statewide environmental advocacy organization. “It’s also upriver from one of the primary intakes for the Pittsburgh Water and Sewage Authority. So what comes out of this plant is going directly into our drinking water.”

In 2015, PennEnvironment released a report, “Toxic Ten: The Allegheny County Polluters That Are Fouling Our Air and Threatening Our Health,” examining air quality data for the largest industrial polluters in Allegheny County. The report investigated not just how much these facilities were polluting, but also how toxic that pollution was. The report ranked Cheswick the second-most toxic polluting facility in Allegheny County. (Cheswick procured an updated Title V air quality permit after the report was released, and the permit did lead to drastic cuts in some of the major smog-forming pollutants that the facility was emitting.)

“The critical changes that are going to lead to cleaner, safer drinking water for the thousands of people who get their water from the Allegheny River have been delayed, and so it continues to threaten our health,” Barber says. “To say nothing of the air quality impacts as well. The plant is a major contributor to smog pollution, which has a negative quality of life impact.”

During the six years that Cheswick has operated under an expired water pollution permit, not only has the technology changed for minimizing pollution from these facilities, but there have been significant changes in policy. For example, before Cheswick secured its new Title V permit, the plant was not required to run all of its existing pollution control systems. Plant operators could leave their NOx controls off and, as a result, release far more pollution than the plant needed to at a time when Allegheny County was already struggling with high levels of ground-level ozone smog from the very pollution that Cheswick had the ability to control.

Why did it take a lawsuit from environmental and citizen advocacy groups to compel the state’s DEP to enforce its own laws?

“All of the agencies, from the federal to the local level, that deal with enforcing our clean air and clean water protections have faced year after year of budget cuts,” Barber says. “As a result, these cash-strapped agencies don’t have enough environmental cops on the beat or have the regulating staff to keep up with what these facilities are doing. Unfortunately, the reality is that there are too many coal-fired power plants in Pennsylvania operating under expired permits in one form or another because our agencies, whether it’s the EPA or the health department, just don’t have the capacity to get these updated permits out in a timely manner.”

“I hear the same thing as everyone else,” says Evgeniadis. “Not enough resources, not enough boots on the ground. But that’s not an excuse to continue to harm and affect public health and our waters.”

Cheswick is just one of dozens of industrial operations in the greater Pittsburgh area that discharge dangerous pollutants into the air and water. The ATI Flat Rolled Products steel facility in Brackenridge is just 10 miles away from Cheswick. To date, the facility, which has been in operation for 20 years, has never been issued an air quality permit by the local air quality authority. The plant applied for a permit back in the 1990s as it was supposed to. Despite receiving several draft permits with multiple rounds of revision since then, the facility has never been issued a final permit.

Other plants in Allegheny County that are operating under expired permits include the U.S. Steel Clairton Coke Works, the largest coke facility in North America. The plant, located just south of Pittsburgh, has operated under an expired air quality permit since 2012. According to the EPA’s Toxic Release Inventory, Clairton is by far the largest polluter in Allegheny County in pounds of total air emissions. The plant releases roughly five to six times the total tonnage of pollutants that Cheswick emits by weight, including heavy metals, smog pollution, hydrogen sulfide, and lead, as well as some volatile organics. In 2017, the Allegheny County Health Department concluded that the plant had violated air pollution standards some 6,700 times in the previous three and a half years.

The settlement that the Pennsylvania DEP reached with the Sierra Club, PennFuture, and the Lower Susquehanna Riverkeeper Association has produced a timeline for new water pollution permits to be drafted for the 10 coal plants that have been operating under expired permits. Once the permits are drafted, they will be posted to the Pennsylvania Bulletin, triggering a 30-day window for submitting public comments regarding the permit. After that, the Lower Susquehanna Riverkeeper Association plans to request a public hearing.

Meanwhile, in another major victory for Pennsylvania’s environmental advocates, Talen Energy announced this week that it had reached a settlement with the Sierra Club to stop burning coal at its Brunner Island power plant. The plant has been one of the largest polluters in the state since it came online in 1961. Like Cheswick, the plant was also operating under an outdated water pollution permit.

“We believe that the removal of the coal-burning component is a step forward in cleaning up the region and making our communities healthier,” Patrick Grenter, senior campaign representative for the Sierra Club’s Beyond Coal campaign in Pennsylvania and Maryland, said. “We will continue to work toward a transition to clean, renewable energy as quickly as possible.”

For Laura Jacko, that process is a long time coming as she tries to catch up to the reality of what this all means for her, her family, and the city and its rivers she has called home for a decade.

“For all these years I’ve been telling people from outside the city how clean Pittsburgh is now,” she says. “If you look at pictures of it from back in the '30s and '40s, it’s got that smog cloud over it. So many people in the United States think of Pittsburgh that way. I was always telling people 'no no no, it’s so much cleaner now.' You would just think that this would be something that’s in the past by now. It’s a shame that it takes citizen groups and everyday residents paying attention to make sure the state follows its own laws and environmental regulations. It shouldn’t be that way.”

What’s happening at Cheswick is, for Jacko, also personal. One part of her family is from Uniontown, famous for its coal-and-steel mine boom. According to family lore, her great-grandfather was a miner who suffered from black lung disease. He died before she was born. When she was pregnant, that history was very much on her mind—she knew full well that her baby was being born in a part of the country with high asthma rates. Even her husband has asthma. Those fears were soon realized: She recently had to get a breathing apparatus with Albuterol for her five-month-old because he got a cold he can’t kick. Doctors told her his lungs are too reactive.

Yet Jacko says she isn’t going anywhere. “This is our home. It’s a magical place. Pittsburgh is wonderful. I believe good things are worth fighting for. If you don’t fight for your community, who will?”

It shouldn’t take citizen groups to bring lawsuits for the government to do its job, Zach Barber says. “We want to have a system where agencies like the DEP have the resources to vigorously enforce the law and help industrial facilities like this comply with the law,” he says. “We need to get them more resources from Harrisburg as well as make sure we populate the ranks of the departments responsible for enforcing environmental protections with people who are going to do what it takes to make sure communities like Cheswick have clean air to breathe 365 days a year, and clean water to drink every time they turn on the tap.”

Until that happens, it will take residents and community leaders, in addition to advocacy and environmental organizations, to make sure that states like Pennsylvania clean up their act.

Jacko intends to be a part of that process.

“I love where I live,” she says. “I wouldn’t live anywhere else. I think Pittsburgh is one of the best places in the United States. I want to see our city get better. I don’t want people to lose their jobs. I want to see them get better jobs. I want a brighter future for our community.”

Jonathan Hahn is the managing editor of Sierra, covering environmental justice and politics, global trade, energy, and public health.

Bike-sharing competition escalates

What started as healthy competition between two powerful, well-funded Chinese companies and a handful of scrappy American upstarts has intensified into a trash-talking land grab involving electric scooters, electric bikes, and plenty of Silicon Valley-style ambition.

In October, LimeBike, the favored competitor of Silicon Valley venture firms Andreessen Horowitz and Coatue Management, raised $50 million in funding. At the time it was in 20 markets around the country, with aggressive plans to expand into 20 more. (The dockless model allows customers to unlock a bike and pay as little as $1 to rent it with their smartphone, leaving it wherever they want when they’re done.) But with Chinese competitors Ofo and Mobike circling, Uber dabbling in electric bikes, and new competitors ranging from auto companies to startups emerging weekly, LimeBike executives believed their plan to double the company's number of markets wasn’t enough. On Monday, LimeBike deployed its first fleet of electric bikes in Seattle and announced it would soon offer scooter sharing as well.
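
For readers unfamiliar with the mechanics, the sketch below shows the dockless flow in the simplest possible terms: a lock opened from a phone, a per-ride charge, and a bike left wherever the trip ends. The names and pricing are illustrative only, not drawn from LimeBike's actual software.

import time
from dataclasses import dataclass

@dataclass
class Bike:
    bike_id: str
    lat: float
    lon: float
    locked: bool = True

def start_ride(bike: Bike) -> float:
    """Unlock the bike from the rider's phone and return the start time."""
    bike.locked = False
    return time.time()

def end_ride(bike: Bike, start: float, lat: float, lon: float) -> float:
    """Lock the bike wherever the rider leaves it and charge per half hour."""
    bike.locked = True
    bike.lat, bike.lon = lat, lon  # no dock: the bike simply stays put
    minutes = (time.time() - start) / 60
    return 1.00 * (int(minutes // 30) + 1)  # $1 per 30 minutes (illustrative)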

CEO Toby Sun says LimeBike, now in 46 markets including two European cities, needs to grow even faster than planned. That’s why the company has raised another $70 million as an extension to the October round, bringing its total backing to $132 million. LimeBike would not comment on its new valuation. In October, investors valued the company at $225 million.

Sun says the company’s fast pace of growth is merely meeting demand from users and cities. The company has also seen demand from landlords. A number of commercial and residential real estate owners have become interested in bike sharing as a way to make their properties more valuable, says Brad Greiwe, a managing partner at Fifth Wall Ventures, which focuses on real estate. His firm led LimeBike’s latest fundraising. Greiwe says property owners are eager to adopt dockless bike sharing because it can expand accessibility to buildings, especially ones that aren’t within walking distance to public transit. Further, tenants increasingly expect bike sharing as an amenity, he says, and landlords want to partner to provide hubs and storage areas for bikes.

Competition plays a big role in the company’s hard-charging growth. In recent months Mobike and Ofo, the two dominant Chinese players, have embarked on aggressive expansion in the US. Chinese startups were first to introduce dockless bike sharing, and they have a head start in terms of footprint, brand recognition, and funding raised. Mobike and Ofo are in hundreds of cities around the world and have raised around $1 billion each in venture funding, valuing them in the billions. Their global expansion and insatiable appetite for growth has snapped US cities, many of which operated slow-to-adapt and expensive docked systems, out of a lull. In November Mobike CTO Joe Xia explained his company’s ethos to WIRED in a language familiar to many in Silicon Valley: “Anything we do we want to totally just disrupt.”

Stateside, LimeBike’s competition includes Spin, Motivate, and a variety of regional upstarts. A former Uber and Lyft executive recently launched Bird, an electric-scooter startup. Ford has backed a San Francisco-based bike-sharing program called GoBike, which recently announced it will add electric bikes. And Uber recently announced a bike service called Uber Bike in partnership with New York startup Jump Bikes.

Dockless bike share has quickly spread around the world because companies technically don’t have to wait for a city’s approval to launch. They can just scatter some bikes around the city and encourage people to start riding. But LimeBike is proceeding with more caution, working in tandem with governments and regulators to avoid having their bikes confiscated and to educate riders on where to properly park them. The parking issue is acute—“bike litter” has already invited complaints and regulatory scrutiny in a number of cities.

LimeBike is open to partnering with a ride-sharing company “or other mobility platform,” Sun says, but he would not comment on any ongoing discussions.

The company has been outspoken about its competition. Caen Contee, head of marketing and partnerships for LimeBike, complained that Ofo is giving rides away for free in Seattle, saying it has hurt LimeBike’s ability to compete. “When a player is doing free rides month after month … it just feels unfair when a company raised a ton of capital and is trying to capitalize that by providing free rides,” he says.

Like most hot categories flooded with new entrants, bike sharing will eventually consolidate. Contee says he’s already seen some smaller startups with less funding narrow their focus to college campuses, which shows a shakeout may already be in the works.


Adding rocks to soil can boost crop health and help capture CO2, scientists say

By Brooks Hays | Feb. 19, 2018 at 3:42 PM


Feb. 19 (UPI) -- Just add rocks. In a recent study, scientists at the University of Sheffield showed the addition of reactive silicate rocks to agricultural soil can boost crop production while limiting the amount of CO2 released into the atmosphere.

In addition to capturing CO2, the rocks also protected crops against pests and disease while improving the soil's structure and fertility. Researchers detailed the benefits of adding rocks to cropland soil in a new paper published this week in the journal Nature Plants.

"Human societies have long known that volcanic plains are fertile, ideal places for growing crops without adverse human health effects, but until now there has been little consideration for how adding further rocks to soils might capture carbon," David Beerling, director of Sheffield's Leverhulme Center for Climate Change Mitigation, said in a news release.

Silicate rocks, like basalt, are left over from ancient volcanic eruptions. When introduced to cropland, they dissolve in the soil. The dissolution sets off a chemical reaction that helps capture and store CO2 in the soil. The reaction also releases nutrients that aid crop growth.
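
As an idealized illustration of that chemistry, the sketch below uses wollastonite (CaSiO3) as a stand-in calcium silicate; real basalt is a mix of minerals and would capture considerably less CO2 per tonne than this upper bound.

M_CASIO3 = 116.16  # g/mol, wollastonite (CaSiO3)
M_CO2 = 44.01      # g/mol

# Idealized weathering reaction:
#   CaSiO3 + 2 CO2 + H2O  ->  Ca(2+) + 2 HCO3(-) + SiO2
# so each mole of the mineral ties up two moles of CO2 as bicarbonate.
co2_per_tonne_mineral = 2 * M_CO2 / M_CASIO3

print(f"~{co2_per_tonne_mineral:.2f} t CO2 captured per t of pure CaSiO3")
# Roughly 0.76 tonnes of CO2 per tonne of the pure mineral, an upper
# bound on what crushed silicate rock spread on fields could draw down.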

Unlike other CO2-capturing methods, rock additives don't require shifts in land use or increased water use. Plus, many farmers already regularly apply limestone to growing soil to reduce acidification.

"Our proposal is that changing the type of rock, and increasing the application rate, would do the same job as applying crushed limestone but help capture CO2 from the atmosphere, storing it in soils and eventually the oceans," said Stephen Long, a professor at the University of Illinois Champaign-Urbana.

To prevent catastrophic global warming, scientists say humans must find a variety of ways to both reduce CO2 emissions and pull more carbon dioxide from the atmosphere.

"Strategies for taking CO2 out of the atmosphere are now on the research agenda and we need realistic assessment of these strategies, what they might be able to deliver, and what the challenges are," said James Hansen from the Earth Institute at Columbia University.

DOE audit of Obama Administration CCS grants shows they paid for booze, spas, limos

Christa Marshall, E&E News reporter

Greenwire: Tuesday, February 13, 2018


The Energy Department’s inspector general found the agency messed up in its management of the Texas Clean Energy Project. Señor Codo/Flickr

The Department of Energy allowed millions of dollars slated for a Texas carbon capture project to be used improperly for lobbying, alcohol, travel and "social" expenses, the agency's inspector general said in an audit released today.

The watchdog found that the Office of Fossil Energy approved $38 million to be distributed to the Texas Clean Energy Project, which was never built, without adequate documentation to ensure the money was used for intended purposes.

In one instance, more than $600,000 in consulting fees related to the project may have been charged for things such as spa services, limousines and first-class trips, auditors said.

"The issues identified occurred, in part, because Fossil Energy had not always exercised sound project and financial management practices in its oversight," the report said.

"We believe that Fossil Energy should thoroughly evaluate and address the issues and apply lessons learned to other similar projects," auditors wrote.

The Texas Clean Energy Project was one of several big carbon capture initiatives for which DOE allocated funds following President Obama's 2009 stimulus package. It envisioned a commercial coal plant near Odessa, Texas, capturing and storing more than 90 percent of its CO2 emissions.

In 2010, DOE's Fossil Energy Office awarded TCEP a cooperative agreement under the "Clean Coal Power Initiative," with DOE expected to cover $450 million. The National Energy Technology Laboratory (NETL) helped implement the program.

After years of challenges in obtaining financing and missing deadlines, Fossil Energy initiated actions to terminate the project two years ago (E&E News PM, May 26, 2016). Last year, Summit Texas Clean Energy LLC, TCEP's developer, filed for bankruptcy protection, and the venture never broke ground.

The audit reported that Summit and associates used more than $1.3 million for "questionable" or prohibited purposes, including catering on a private jet and banquet room rentals.

The IG said invoices with spa, limousine and alcohol expenses were particularly concerning because they included Summit employees traveling with one consulting firm, increasing the risk of double billing.

Another $1.2 million may have been used illegally for prohibited lobbying services. In one instance, Summit may have overstated the value of a cost share with a land purchase by $384,000.

The IG said problems arose in part because Fossil Energy didn't identify improper lobbying activities and approved funds without complete paperwork.

In one example, invoices for $16.9 million in subcontractor costs did not include details about the nature of services provided or the number of hours worked.

The audit notes that the company disputed that funds were used improperly for lobbying. Project developers said the questioned billing was to obtain clarification on an IRS rule, not to change legislation.

In a statement, Summit Power Group said it disagreed with the findings and was disappointed that the report was issued without any advance opportunity for review.

"As a condition of receiving the award, Summit put in place rigorous financial oversight controls and as the report notes, the project underwent annual external audits that found no significant findings or questionable costs," Summit said.

Because of those external audits, the DOE IG said it was not questioning many of the costs. "However, these audits were not typically conducted until well after Summit had been reimbursed," the watchdog said.

Overall, the Office of Fossil Energy said it agreed with the IG's recommendations and would take "corrective action."

In an accompanying letter, Assistant Secretary for Fossil Energy Steven Winberg said, "The facts and circumstances detailed in the report support the decision to discontinue this high-profile, major demonstration project."

Several actions would be implemented immediately at the fossil office, including requiring detailed documentation to be submitted for reimbursements, reviewing cost share procedures at NETL and having a contracting officer review costs specifically related to Summit, he said.

Kurt Waltzer, managing director at Clean Air Task Force, said in an email the report was "incredibly frustrating" because the U.S. underinvests in low-carbon technology.

"We need to keep some DOE grant-making ability in the clean energy toolkit for early stage efforts — and DOE clearly needs to reform its oversight, but the most important way to avoid these problems with big commercial projects is performance-based incentives like (tax credits) — so people get paid for what they do," he said.

Last week, Congress approved expanded tax credits for carbon storage projects that advocates had sought for almost a decade (Greenwire, Feb. 9).

Clothing Industry Set to Consume a Quarter of the Global Carbon Supply by 2050

What would it take to make the fashion industry truly sustainable?

By Michelle Chen

The hot trend on today’s catwalk is “sustainable fashion,” with big names like H&M and Stella McCartney hailing a new wave of socially conscious apparel. But in an industry based on bottomless consumption, ever-cheaper prices, and ever-declining labor and environmental standards, fast fashion and earth-friendly just don’t seem to match.

However, some designers are seeking to refashion our clothes with a green conscience under the label of the “circular economy.” A purportedly less exploitative production system, a circular economy is an integrated system that constantly recirculates and renews materials, through recycling, reuse, resale, or reduced consumption, to ensure minimum waste and exploitation.

The Ellen MacArthur Foundation (EMF), an environmental philanthropy, is campaigning to reshape the fashion industry with a circular-textiles initiative, and has teamed up with some of the biggest brands in fashion to help overhaul the business model. The hope is to reverse the hyper-consumption that is currently promoted in fashion—an industry on track to consume a quarter of the global carbon supply by 2050—and to recast the whole clothing supply chain into a system based on balanced consumption and less-intensive production methods.

A “circular” clothing industry, in theory, would be elegantly balanced: The fashion business model would be reoriented toward reducing consumption and waste at every step, from the cotton field to the storefront window. On the production end, reducing chemical-intensive synthetic fibers would sharply cut pollution. Manufacturers would systematically decrease the pace and intensity of production, so that a company’s energy consumption would automatically shrink to fit the reduced resource needs for fewer garments and less overseas exporting. As the carbon footprint downsizes in production, circularity would be encouraged in the retail market as well by designing more durable styles, which could be worn for years, rather than become disposable within a few months. EMF also recommends creating a second shelf life for used clothing by expanding the marketing of resold and rented apparel.

An accompanying industry analysis by EMF finds that, eventually, the value derived from more eco-friendly production and retail would offset the impacts of waste generated by the fashion industry. By halting or reversing the cycle of environmental degradation, the industry could provide an avenue toward shrinking consumers’ carbon footprints.

There are some circular economy skeptics, however: Even assuming that “sustainable” textile and apparel manufacturing can become less energy-intensive and “cleaner” through improved technology, critics warn that such material changes may fail to grapple with the problem at the root of our capitalist economy: How can society derive limitless profits from a planet with finite resources?

The underlying crisis lies not just in the technical quality, but in the overall quantity of production, circular or not. According to Maddy Cobbing of the advocacy group Greenpeace’s Detox My Fashion campaign, “the current rates of excessive production and consumption in the industry as a whole are probably outweighing any gains that are being made on eliminating hazardous chemicals.” So comprehensively scaling down production remains the safest way to decrease environmental impacts, and, in the immediate term at least, “the industry needs to take a more responsible approach and slow down the flow of materials as the first priority.”

The circular-economy conversation currently remains tethered to an assumption of never-ending growth. But according to Australia-based activist Sharon Ede, who has analyzed circular economics with the Post-Growth Institute, “Business opportunity is one thing…but if we have business that depends on ever more growth and material throughput to survive…if we do not challenge how commerce operates at its heart…then all our recycling, resource efficiency, ethical labour and other practices will not be enough.”

“It’s not just about materials,” Ede argues. “It’s about the wider context in which activity occurs. What kinds of things would have to change in our culture, economics, [and] society related to how we attribute value and meaning to clothing, how it is made, the conditions in which it is made and where, by whom, and creating what impacts?”

EMF imagines that a circular economy “would be distributive by design, meaning value is circulated among enterprises of all sizes in the industry so that all parts of the value chain can pay workers well and provide them with good working conditions.” And some sustainability economists argue that low-skill factory jobs might be replaced eventually with higher-skill jobs in the “green economy.” But it is likely that many low-wage garment workers, who are often young women or migrants with few other job options, will face immediate displacement in a downsized clothing manufacturing industry.

So how serious is the fashion world about investing in what the industry has taken to calling “fair burden sharing” for the most vulnerable workers? Regardless of the efficiency and eco-friendliness of factories, concrete labor protections are needed to ensure that workers can benefit from increased environmental sustainability. The industry’s current ethical sourcing codes are opaque, inconsistent and corporate-controlled through voluntary self-auditing programs. Often self-monitoring systems serve more to greenwash a brand’s image than to actually clean up their supply chain.

To make jobs both cleaner and fairer on a global scale, there must be full traceability of materials in the supply chain to stamp out the use of labor trafficking, and uphold standards for sustainable materials. This has proved tricky in many industries, from jewelry to electronics, so the clothing industry would be a pioneer if it were able to fully track materials from cotton plant to High Street shelf.

Given the risks for vulnerable workers under the industry’s current production systems, Greenpeace argues, “Better working conditions and opportunities for rewarding work are likely to follow as a result of less exploitation of nature and people.” But job loss could be unavoidable, which will require a new type of fashion industry with more balanced workplace governance, and designers cooperating with workers on design and planning. Only if workers are protected from the worst employment impacts of reduced production could they ever really benefit from the environmental dividends of a more environmentally healthy, less toxic system of production.

The transition could begin with more oversight over business practices and setting labor protections and compensation standards that guarantee workers’ rights and equity. Cheap assembly-line jobs could be converted to less energy-intensive, technology-driven manufacturing work, not in mega-factories but in smaller-scale, cooperative environments. Such a process is “circular,” but it’s also humane, and recognizes the need for sustainable jobs as well as sustainable products, to truly “close the loop” of production—by returning wealth to communities while recycling the resources that form the fruit both of their loom and their labor.

A circularity movement requires circular thinking not just within corporations, but across society: exchanging dialogue with impacted communities in workplaces, farms, or unions—and engaging people who understand that grassroots circularity means consuming consciously, in balance with nature, and reinvesting in holistic community development. For the glamorous designers rethinking their brands, making a circular economy work means sharing that lofty vision with those whose lives most depend on it.

Florida Keys to raise roads before climate change puts them underwater. It won’t be cheap

By Alex Harris, Miami Herald

Published: February 4, 2018, Updated: February 4, 2018 at 08:14 PM

In a small Key Largo neighborhood, the tide came in — and didn’t go out for almost a month.

Residents sloshed through more than a foot of saltwater that lapped at their front yards, knocked over their trash cans, created a mosquito breeding ground and made their roads nearly impassable. Some residents rented SUVs to protect their own cars. Others were homebound.

That was the fall of 2015, courtesy of freak weather and high tides. Neighbors have clamored for solutions since, and Monroe County has finally pitched a potential fix. Officials want to elevate the lowest, most flood-prone road in the Twin Lakes Community of Key Largo and in the low-lying Sands neighborhood of Big Pine Key, 70 miles south — and 2018 might be the year it happens. The county will start accepting design proposals in the coming weeks, and money for construction could be available in October.

It’s a small but significant project — it will be the first road project in the Keys specifically designed for adaptation to future sea level rise, a clear and present problem for the famous chain of islands. The county has already spent $10 million on road projects that include elevation, and plans to spend at least $7 million more in the near future. But these are the first to include collecting, pumping and treating the stormwater that runs off the newly raised road.

These small stretches of road are test cases for the county. Monroe hopes to use lessons learned here on the rest of the roads that climate change will swamp in the years to come. Out of 300 total miles of county roads, half are susceptible to sea level rise in the next 20 years, said Rhonda Haag, the county’s sustainability program manager.

It won’t be cheap. Early estimates show that raising just one-third of a mile of road above sea level could cost a million dollars in Key Largo and more than $2.5 million in Big Pine.

For comparison, the county spent $3.3 million repairing about two miles of road in Lake Surprise Estates in Key Largo. That was a less ambitious fix that rebuilt some parts, elevated others and included more limited drainage additions.

This new project is unlike anything the county has done before, said Haag. It’s "like comparing oranges to apples."

Elevating is pricier than repairing a regular chunk of road because the process entails much more than just pouring extra asphalt on top. All that water from incoming tides has to go somewhere, and handling it requires building new structures like pumps and pipes.

"In the Keys we don’t have drainage infrastructure," said Haag. "Basically it’s blacktop on the road."

The unique geography of the Keys plays a big part in why drainage solutions common in other areas won’t work on the South Florida islands. Underneath the dirt on each island is porous limestone rock. Sometimes, when water levels get high, that sponge-like rock is filled with groundwater. It can degrade the materials used to form the base of the roads and crack asphalt.

High groundwater also means engineers can’t count on the ground to absorb runoff, so they have to turn to pumps to send the water elsewhere, as Miami Beach does. The city has spent more than $125 million on a drainage system and elevated roads.

Unlike Miami Beach, Monroe wants to clean the water they’re pumping back into the ocean. Miami Beach filters the water for large objects such as plastic bottles, but studies have shown the pumps pick up fecal matter from the roads and wash the pollution straight into the bay.

"You cannot do that," Haag said. "We absolutely, positively have to treat the stormwater before it’s released."

Treatment is pricey, and building the necessary pipes and treatment plant will likely make up a significant amount of the project’s costs. Pumps also require backup generators in case of emergencies, which require extra land, something the current cost estimate doesn’t factor in.

When the county first considered raising the roads, the plan was to use gas taxes and road repair money. Ironically, after Hurricane Irma decimated a large swath of the lower and middle Keys (including the project area in Big Pine) the county now has access to FEMA hazard mitigation grants. Haag said Monroe County is considering applying.

An estimate shows raising less than a mile of road could cost the county upwards of $3.5 million. In Big Pine, that chunk of road is going up a foot. In Key Largo, it’s being raised six inches. At best, that’s a minimalist approach. Those elevations are just an inch above what researchers say is necessary for both roads to be above sea level for the next 25 years.

To elevate any higher, "Well, that’s expensive," Haag said.

Bringing the Key Largo neighborhood up to a foot above sea level would double the length of road included and quadruple the price. In Big Pine, elevating to 18 inches above sea level would require spending almost $9 million on 1.29 miles of road.

Still, researchers say the fixes would buy some time for the low-lying islands. The roads are expected to be higher than sea level until at least 2040. By then, scientists predict climate change will raise the sea level 7 to 18 inches, which would leave the roads — if left as they are — underwater.

Even with these millions of dollars in reconstruction, there isn’t a guarantee that these roads will stay dry. Haag said the new roads should still see an average of seven days of flooding a year, depending on which sea level rise prediction comes true.

"To keep everyone dry all the time dramatically raised the price," she said. "You’d have to go much higher."

How Bill Gates aims to clean up the planet

It’s a simple idea: strip CO2 from the air and use it to produce carbon-neutral fuel. But can it work on an industrial scale?

John Vidal

Sun 4 Feb 2018 04.00 EST

It’s nothing much to look at, but the tangle of pipes, pumps, tanks, reactors, chimneys and ducts on a messy industrial estate outside the logging town of Squamish in western Canada could just provide the fix to stop the world tipping into runaway climate change and substitute dwindling supplies of conventional fuel.

It could also make Harvard superstar physicist David Keith, Microsoft co-founder Bill Gates and oil sands magnate Norman Murray Edwards more money than they could ever dream of.

The idea is grandiose yet simple: decarbonise the global economy by extracting global-warming carbon dioxide (CO2) straight from the air, using arrays of giant fans and patented chemical whizzery; and then use the gas to make clean, carbon-neutral synthetic diesel and petrol to drive the world’s ships, planes and trucks.

The hope is that the combination of direct air capture (DAC), water electrolysis and fuels synthesis used to produce liquid hydrocarbon fuels can be made to work at a global scale, for little more than it costs to extract and sell fossil fuel today. This would revolutionise the world’s transport industry, which emits nearly one-third of total climate-changing emissions. It would be the equivalent of mechanising photosynthesis.

The individual technologies may not be new, but their combination at an industrial scale would be groundbreaking. Carbon Engineering, the company set up in 2009 by leading geoengineer Keith, with money from Gates and Murray, has constructed a prototype plant, installed large fans, and has been extracting around one tonne of pure CO2 every day for a year. At present it is released back into the air.

But Carbon Engineering (CE) has just passed another milestone. Working with California energy company Greyrock, it has now begun directly synthesising a mixture of petrol and diesel, using only CO2 captured from the air and hydrogen split from water with clean electricity – a process they call Air to Fuels (A2F).

“A2F is a potentially game-changing technology, which if successfully scaled up will allow us to harness cheap, intermittent renewable electricity to drive synthesis of liquid fuels that are compatible with modern infrastructure and engines,” says Geoff Holmes of CE. “This offers an alternative to biofuels and a complement to electric vehicles in the effort to displace fossil fuels from transportation.”

Synthetic fuels have been made from CO2 and H2 before, on a small scale. “But,” Holmes adds, “we think our pilot plant is the first instance of Air to Fuels where all the equipment has large-scale industrial precedent, and thus gives real indication of commercial performance and viability, and leads directly to scale-up and deployment.”

The next step is to raise the money, scale up and then commercialise the process using low-carbon electricity like solar PV (photovoltaics). Company publicity envisages massive walls of extractor fans sited outside cities and on non-agricultural land, supplying CO2 for fuel synthesis, and eventually for direct sequestration.

“A2F is the future,” says Holmes, “because it needs 100 times less land and water than biofuels, and can be scaled up and sited anywhere. But for it to work, it will have to reduce costs to little more than it costs to extract oil today, and – even trickier – persuade countries to set a global carbon price.”

Meanwhile, 4,500 miles away, in a large blue shed on a small industrial estate in the South Yorkshire coalfield outside Sheffield, the UK Carbon Capture and Storage Research Centre (UKCCSRC) is experimenting with other ways to produce negative emissions.

The UKCCSRC is what remains of Britain’s official foray into carbon capture and storage (CCS), which David Cameron had backed strongly until 2015. £1bn was ringfenced for a competition between large companies to extract CO2 from coal and gas plants and then store it, possibly in old North Sea gas wells. But the plan unravelled as austerity bit, and the UK’s only running CCS pilot plant, at Ferrybridge power station, was abandoned.

The Sheffield laboratory is funded by £2.7m of government money and run by Sheffield University. It is researching different fuels, temperatures, solvents and heating speeds to best capture the CO2 for the next generation of CCS plants, and is capturing 50 tonnes of CO2 a year. And because Britain is phasing out coal power stations, the focus is on achieving negative emissions by removing and storing CO2 emitted from biomass plants, which burn pulverised wood. As the wood has already absorbed carbon while it grows, it is more or less carbon-neutral when burned. If linked to a carbon capture plant, it theoretically removes carbon from the atmosphere.

Known as Beccs (bioenergy with carbon capture and storage), this negative emissions technology is seen as vital if the UK is to meet its long-term climate target of an 80% cut in emissions from 1990 levels by 2050, according to UKCCSRC director Professor Jon Gibbins. The plan, he says, is to capture emissions from clusters of major industries, such as refineries and steelworks in places like Teesside, to reduce the costs of transporting and storing the CO2 underground.

“Direct air capture is no substitute for using conventional CCS,” says Gibbins. “Cutting emissions from existing sources at the scale of millions of tonnes a year, to stop the CO2 getting into the air in the first place, is the first priority.

Photo: CO2 solidified into carbonate minerals after being injected into basalt formations at the Hellisheiði geothermal power plant in Iceland.

“The best use for all negative emission technologies is to offset emissions that are happening now – paid for by the emitters, or by the fossil fuel suppliers. We need to get to net zero emissions before the sustainable CO2 emissions are used up. This is estimated at around 1,000bn tonnes, or around 20-30 years of global emissions based on current trends,” he says. “Having to go to net negative emissions is obviously unfair and might well prove an unfeasible burden for a future global society already burdened by climate change.”

The challenge is daunting. Worldwide manmade emissions must be brought to “net zero” no later than 2090, says the UN’s climate body, the Intergovernmental Panel on Climate Change (IPCC). That means balancing the amount of carbon released by humans with an equivalent amount sequestered or offset, or buying enough carbon credits to make up the difference.

But that will not be enough. To avoid runaway climate change, emissions must then become “net negative”, with more carbon being removed than emitted. Many countries, including the UK, assume that negative emissions will be deployed at a large scale. But only a handful of CCS and pilot negative-emission plants are running anywhere in the world, and debate still rages over which, if any, technologies should be employed. (A prize of $25m put up by Richard Branson in 2007 to challenge innovators to find a commercially viable way to remove at least 1bn tonnes of atmospheric CO2 a year for 10 years, and keep it out, has still not been claimed – possibly because the public is uncertain about geoengineering.)

The Achilles heel of all negative emission technologies is cost. Government policy units assume that they will become economically viable, but the best hope of Carbon Engineering and other direct air extraction companies is to get the price down to $100 a tonne from the current $600. Even then, to remove just 1% of global emissions would cost around $400bn a year, and would need to be continued for ever. Storing the CO2 permanently would cost extra.

Critics say that these technologies are unfeasible. Not using the fossil fuel and not producing the emissions in the first place would be much cleverer than having to find end-of-pipe solutions, say Professor Kevin Anderson, deputy director of the Tyndall Centre for Climate Change Research, and Glen Peters, research director at the Centre for International Climate Research (Cicero) in Norway.

In a recent article in the journal Science, the two climate scientists said they were not opposed to research on negative emission technologies, but thought the world should proceed on the premise that they will not work at scale. Not to do so, they said, would be a “moral hazard par excellence”.

Instead, governments are relying on these technologies to remove hundreds of millions of tonnes of carbon from the atmosphere. “It is breathtaking,” says Anderson. “By the middle of the century, many of the models assume as much removal of CO2 from the atmosphere by negative emission technologies as is absorbed naturally today by all of the world’s oceans and plants combined. They are not an insurance policy; they are a high-risk gamble with tomorrow’s generations, particularly those living in poor and climatically vulnerable communities, set to pay the price if our high-stakes bet fails to deliver as promised.”

According to Anderson, “The beguiling appeal of relying on future negative emission technologies is that they delay the need for stringent and politically challenging policies today – they pass the buck for reducing carbon on to future generations. But if these Dr Strangelove technologies fail to deliver at the planetary scale envisaged, our own children will be forced to endure the consequences of rapidly rising temperatures and a highly unstable climate.”

Kris Milkowski, business development manager at the UKCCSRC, says: “Negative emissions technology is unavoidable and here to stay. We are simply not moving [to cut emissions] fast enough. If we had an endless pile of money, we could potentially go totally renewable energy. But that transition cannot happen overnight. This, I fear, is the only large-scale solution.”

How Trump has already scored a win on infrastructure

The cuts passed last year will allow utilities to tap money set aside in deferred taxes, and will set off a scramble over how to spend it.

By DARIUS DIXON for Politico, 02/06/2018 08:05 PM EST

The Republican tax cut could spark the multibillion-dollar infrastructure program that almost nobody expected.

Electric and gas utilities are finding themselves with vast amounts of excess cash as a side effect of last year's tax code rewrite — money that could easily total tens of billions of dollars, based on initial corporate filings. Those funds could become available for a massive buildout of energy infrastructure, for projects such as modernizing the electric grid, installing pipelines or putting up wind farms.

The cash surge could exceed anything that comes out of President Donald Trump’s still-unreleased $1.5 trillion infrastructure plan if that proposal runs aground in Congress. The debate about how to spend the utility windfall will take place in dozens of states and is already crossing ideological divides — environmental groups, for example, are discussing whether some money should go to uses like retraining displaced coal workers.

The cash would come from "deferred" money that the utilities set aside to pay future years' taxes. After Congress slashed the corporate tax rate, many utilities are finding they’ve set aside too much. And in the regulated environment most power companies operate in, excess funds need to go to uses that benefit their customers.

"This deferred income tax pool is big money," said David Springe, executive director of the National Association of State Utility Consumer Advocates, a trade association. "This is where all the money is. That is why the utilities really care."

One utility owner, New Jersey-based PSEG, estimates it will have at least $1.8 billion in its deferred tax pool to spend on rate cuts or infrastructure upgrades. American Electric Power's regulatory filings list $4.4 billion, while NextEra Energy's sits at $4.5 billion. Virginia’s Dominion Energy has $988 million. Others will start rolling out their "excess" figures over the next several weeks as they file annual financial reports to the Securities and Exchange Commission.

Deferred tax balances across the power industry alone total around $165 billion, according to the Edison Electric Institute, a trade group of investor-owned power utilities — although the portion that would be considered excess under the new tax rate is unknown.

Instead of letting utilities spend the excess portion, state utility commissions could tell the companies to return it to their customers in the form of reduced rates. But power and gas suppliers are likely to argue that at least some money should go to projects that will benefit consumers, especially since the utilities have already collected the money.

“Some commissions may say, 'We want to give it all back and we're going to do it over 10 years,'" said Casey Herman, who leads PricewaterhouseCoopers' U.S. power and utilities advisers unit. "Or they may say that this is a unique opportunity for us to implement something we wanted to implement but it wasn't affordable to our customers before."

Because no new burden would be placed on ratepayers, it may be easy for utilities to make the case for spending it.

"I expect that a lot of utilities will argue that the tax savings will be an opportunity to invest in infrastructure without having to come back and ask for rate increases to recover those costs," said Greg White, a former Michigan energy regulator who is executive director of the National Association of Regulatory Utility Commissioners.

The utilities accrued the large sums because they generally don't pay off a transmission line, power plant or other expensive, long-term project in a single year, in order to avoid rate shocks on customers. Instead, they work with state regulators to spread out those costs — and the federal taxes — over decades. And because consumers are already on the hook for those decades-long payment schedules, the tax bill's 40 percent drop in corporate tax rates means utilities are on track for a sizable over-collection.
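
To see how such an over-collection arises, the short Python sketch below works through the arithmetic with a purely hypothetical deferred tax balance. The old and new statutory rates (35 and 21 percent) match the roughly 40 percent drop described above, but the $1 billion balance is an illustrative assumption, not any utility's actual figure, and real regulatory accounting is considerably more involved.

# Illustrative sketch of how an "excess" deferred tax balance arises after a rate cut.
# All figures are hypothetical; actual utility filings involve many more factors.

OLD_RATE = 0.35  # federal corporate rate assumed when the balance was collected
NEW_RATE = 0.21  # rate under the 2017 tax law

def excess_deferred_tax(deferred_balance):
    """Portion of a deferred tax balance no longer needed at the new, lower rate."""
    # Customers funded the balance assuming taxes at OLD_RATE; only
    # NEW_RATE / OLD_RATE of it is still required, so the rest is "excess".
    return deferred_balance * (OLD_RATE - NEW_RATE) / OLD_RATE

if __name__ == "__main__":
    balance = 1_000_000_000.0  # hypothetical $1 billion deferred tax pool
    excess = excess_deferred_tax(balance)
    print(f"Excess deferred taxes at the lower rate: ${excess / 1e6:,.0f} million")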

In some states, early signs point to efforts to direct that money toward investments ranging from upgrading the electric grid to retraining workers.

"It's up to the utilities commissions to decide what to do, and the default position is to just reduce the rates," said John Finnigan, a senior regulatory attorney with the Environmental Defense Fund, but there's no reason the money couldn't be invested in the grid, energy efficiency or renewable energy.

The International Energy Agency has estimated that the U.S. power grid would need $2.1 trillion in new investments between 2014 and 2035 to accommodate the transition to newer energy sources, such as wind or solar.

Most utilities keep a to-do list of projects, said Eric Grey, the Edison Electric Institute's director of government relations, so the money freed up because of the tax change is "going to fuel our member companies to move forward with those advancements in infrastructure."

Two factors have really driven a spike in deferred tax balances over the past 15 years: a dramatic increase in capital spending on infrastructure, and the generous bonus depreciation incentives put in place after the 2008 recession that gave companies bigger write-offs for equipment purchases.

"Since 2004, we've almost been in this super-cycle as far as a build cycle that has tripled [capital expenditures] to somewhat north of $100 billion," Grey said. "At the end of the day, it's our customers' money. It's our due diligence to make sure that that goes back to them whether that's in rates or in infrastructure that is needed by them."

Low-profile state utility commissions are the bodies that will ultimately navigate the currents of local politics, long-term energy planning, business concerns and their own legal boundaries on what to do with the deferred tax windfalls.

For big utilities, the issue gets complicated by their spread across multiple jurisdictions. Ohio-based AEP, for example, has to sort out how its $4.4 billion return plays out in the 11 states it operates in.

Even state-backed consumer advocate offices, which tend to negotiate for less spending, aren't unified in pushing to translate every excess dollar into rate reductions.

"My membership probably leans more towards drawing that money back," said Springe, from the utility consumer advocates' association. But he said that "every state is going to be different, and in every one of those states the consumer advocate may or may not have the same view as the utilities."

Still, the significant sums in play present an enticing opportunity for utilities and regulators.

"I struggle to think that a state would simply take all of that money and use it for some particular project — then you're not getting any rate reductions,” Springe said. “I think it’s equally likely that states will take some of that money and use it on projects.”

EDF's Finnigan said his group will advocate for particular projects at state commissions once utilities start saying how much money is at stake. But as this unfolds in every state over the next year or so, he said, his group plans to focus on nine states, particularly those with the largest greenhouse gas emissions and energy consumption, such as California, Texas, Ohio and New York.

Even critics of Trump's energy and environmental policies see the deferred tax money as a silver lining of the GOP law. The Sierra Club is pondering a strategy for using the excess tax funds to retrain coal power plant workers at sites slated for closure in rural areas.

"The Trump administration is not offering a coherent policy for economic transition for coal communities. Instead, it's giving them false promises that coal's getting revived," said Bill Corcoran, the Western director of the group's Beyond Coal campaign. "In that gap ... or even an acceptance that the transition is happening, we should strive to make use of this tax bill."

Charging utility consumers to retrain power plant workers facing job losses isn't without precedent. Last month, California regulators approved a plan to close the state's last two nuclear reactors by 2025, and the deal included $223 million for a retention and retraining program for plant employees who are being forced to move on. Corcoran also pointed to the pending closure of a coal-fired power plant in Washington state where the owner, TransAlta, agreed to pour $55 million into a local economic development fund.

"It's important to have robust models for that transition," Corcoran said, noting coming coal plant closures in New Mexico, Colorado and Montana.

"We want to make sure that in the understandable rush to return money to ratepayers that important community questions aren't missed in the process," he said. "This moment is a prod to have that conversation."

Farm sunshine, not cancer: Replacing tobacco fields with solar arrays

Michigan Technological University

Although tobacco use is the leading cause of avoidable death globally, farming tobacco continues to provide the primary source of income to many farmers. But two Michigan Technological University researchers contend that converting tobacco fields to solar farms could profitably serve two purposes: Reduce preventable deaths and meet the growing need for solar energy to combat climate change.

Ram Krishnan, now an engineer designing large solar systems in the rapidly expanding U.S. solar industry, and Joshua Pearce, professor of materials science and electrical engineering, completed a study "Economic Impact of Substituting Solar Photovoltaic Electric Production for Tobacco Farming" to be published in Land Use Policy.

As demand for solar energy grows so does the demand for land for solar farms. "To completely eliminate the need for burning fossil fuels, solar technology requires large surface areas," Pearce explains.

However, as demonstrated by the conversion of cropland to energy for ethanol production, removing arable land from food production can cause a rise in global food prices and food shortages. Targeting land that grows crops with known health hazards for solar energy production removes that detrimental consequence from the equation, the researchers say, and converting tobacco fields to solar arrays could give farmers a tantalizing opportunity to increase their profits by thousands of dollars per acre per year.

Krishnan and Pearce selected North Carolina for their case study because it is a major tobacco-producing state with large swaths of land and high solar potential.

"Previous, more modest attempts to offset fossil fuels with biofuels required so much land that food crops were offset, raising food prices and increasing hunger throughout the world. We were looking for large areas of land that could be used for solar power that would not increase world hunger."

Tobacco croplands provided an interesting opportunity because tobacco use in America is declining and has well-documented harmful impacts on human health. Pearce notes that tobacco continues to be farmed in the U.S. today because farmers can make money doing it.

"We were interested in what conditions were needed to enable tobacco farmers to begin installing solar energy systems on the same land," he says. "We looked at likely trends in all of the major economic factors, but were surprised to find that because the cost of solar has dropped so dramatically it is already economically advantageous for tobacco farmers to replace tobacco with solar in many situations."

Additionally, unlike plants, solar modules can withstand extreme heat, cold, ice, snow, hail, torrential rain, droughts and other increasingly unstable climate conditions. They are rated to withstand winds upwards of 150 miles per hour.

Krishnan and Pearce conducted a sensitivity analysis on the economic factors of the installed solar farms and their effects on profit. They compared solar profit and the profit available for simply farming tobacco per acre per year.

They used conservative positive assumptions on tobacco crop yield and price, noting there could also be a decrease in demand for tobacco as fewer people take up smoking and existing smokers die. They then looked at how much the price of electricity could increase each year based on a range of past increases (called the escalation rate, which ranged from 0.3 percent in 2010 value to 5.7 percent in 2008 value). As electricity becomes more valuable because of a higher escalation rate, solar energy produced in the future becomes more valuable. Even with relatively modest escalation rates, solar electricity provides tremendous profits of thousands of dollars to tens of thousands of dollars more per acre per year for land owners.

Escalation rates and the associated value of electricity in the future are presented in the paper over a wide range. Similarly, calculations for the costs of the solar installation varied from two dollars per watt (expensive in today's market, in which solar farms have come in at one dollar per watt) to realistic potential reduced costs in the future of 80 cents per watt. The sensitivity analysis presented in the paper allows farmers to make their own educated guesses on inputs based on their local utility's electric rates and the price they are quoted for a photovoltaic farm on their land, then make the decision that makes the most sense for them. To determine whether a solar photovoltaic farm would work on their land, tobacco farmers should calculate the LCOE (levelized cost of electricity) for their solar farm and compare it to the price of electricity in their particular location, taking into account rate structure, load and other economic factors. More details on these calculations are provided in the paper.
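
As a rough illustration of the LCOE comparison described above, the Python sketch below levelizes the lifetime cost of a hypothetical photovoltaic farm and compares it with an assumed local electricity price. Every input (installed cost, output, operating cost, discount rate, lifetime, degradation, retail price) is a placeholder assumption for illustration, not a value taken from the Krishnan and Pearce paper.

# Minimal LCOE sketch for a hypothetical PV farm; every input below is an assumption.

def lcoe(capex, annual_om, annual_kwh, lifetime_yrs, discount_rate, degradation=0.005):
    """Levelized cost of electricity ($/kWh): discounted lifetime costs / discounted lifetime output."""
    costs = float(capex)
    energy = 0.0
    for yr in range(1, lifetime_yrs + 1):
        factor = (1 + discount_rate) ** yr
        costs += annual_om / factor
        energy += annual_kwh * (1 - degradation) ** (yr - 1) / factor
    return costs / energy

if __name__ == "__main__":
    # Hypothetical 10 MW farm at $1 per watt installed, ~1,500 kWh per kW per year.
    capex = 10_000_000           # $10 million installed cost
    annual_kwh = 10_000 * 1_500  # 10,000 kW of capacity
    cost_per_kwh = lcoe(capex, annual_om=150_000, annual_kwh=annual_kwh,
                        lifetime_yrs=25, discount_rate=0.06)
    local_price = 0.11  # assumed local retail electricity price, $/kWh
    verdict = "favors solar" if cost_per_kwh < local_price else "does not yet favor solar"
    print(f"LCOE ~ ${cost_per_kwh:.3f}/kWh vs. local price ${local_price:.2f}/kWh -> {verdict}")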

Providing Energy, Saving Lives

If every tobacco farm in North Carolina converted to solar energy production, there is the potential to generate 30 gigawatts, which is equivalent to the state's peak summer load. In the long run, tobacco farmers stand to make more money farming solar rays for energy instead of growing a component of cigarettes.

The primary obstacle for tobacco farmers is the capital cost of the solar system. Currently, a 10-megawatt solar farm priced at $1 per watt costs $10 million to install. To help farmers make such a conversion, Pearce argues, governments in tobacco states should begin exploring policies to ease the transition. Many tobacco farmers would need to rely on investors to deploy solar, and local governments could also help by enacting policies that give landowners access to the needed capital.

In addition to the economic and environmental benefits, the researchers estimate conversion of North Carolina's tobacco fields would save 2,000 American lives per year from pollution reduction alone by offsetting coal-powered electricity.

Based on numbers from the Centers for Disease Control, if all tobacco use were eliminated by replacing tobacco farms in the U.S. with solar farms, more than 480,000 American deaths per year from cigarette smoking would be prevented. In addition, 42,000 American deaths per year from the effects of secondhand smoke would also be avoided.

Altogether, this represents more than half a million premature deaths prevented in the U.S. every year.

"The economic benefits for ex-tobacco farmers going into solar is nice," Pearce concludes, "but the real payoff is in American lives saved from both pollution prevention and smoking cessation."

The big burn

Geologist James Kennett and colleagues provide evidence for a massive biomass burning event at the Younger Dryas Boundary

University of California - Santa Barbara

Some 13,000 years ago, a cataclysmic event occurred on Earth that was likely responsible for the collapse of the Clovis people and the extinction of megafauna such as mammoths and mastodons.

That juncture in the planet's geologic history -- marked by a distinct layer called the Younger Dryas Boundary -- features many anomalies that support the theory of a cometary cloud impacting Earth. The collision triggered a massive biomass burning event, and the resulting soot, ash and dust in the global atmosphere blocked out the sun, which prevented photosynthesis -- a phenomenon called impact winter.

For more than a decade, UC Santa Barbara professor emeritus James Kennett has studied elements found at the Younger Dryas Boundary (YDB). He has collaborated with scientists around the globe, providing evidence at the YDB for a platinum peak as well as for spherules, melt glass, nanodiamonds and other exotic materials that can be explained only by cosmic impact.

Kennett and his colleagues have now published new research in the Journal of Geology. In two papers, they analyze existing published scientific data from ice, glacier, lake, marine and terrestrial sediment cores, finding evidence for an extensive biomass burning episode at the YDB layer representing one of the most extreme events -- if not the most extreme -- ever experienced by our own species, anatomically modern humans. Recent extreme climate and burn events like those in California pale by comparison, Kennett said.

The group's theory posits that a cometary cloud -- a single broken-up comet broader than Earth's diameter -- entered Earth's atmosphere, causing impacts and aerial explosions that sparked fires around the globe. Co-author William Napier, a British astrophysicist and leading expert on cometary impacts, contributed an updated section on impact theory in one of the two papers featured in the journal.

"The ice cores are the most persuasive because they are so well dated," explained Kennett, a professor emeritus in UCSB's Department of Earth Science. "What's more, they provide sound geochemical results that point to a large biomass burning event precisely coinciding with the YDB layer formed when this major comet impacted Earth."

The investigators studied byproducts of biomass burning and found a peak in ammonium. They also found other peaks in combustion aerosols such as nitrate, acetate, oxalate and formate. According to Kennett, collectively these elements reflect the largest biomass burning episode in the past 120,000 years of the Greenland ice sheet.

The scientists also examined the record of atmospheric carbon dioxide entrained in Antarctic ice, which also shows an increase in CO2 at the YDB. "With extensive biomass burning, you'd expect an increase in CO2," Kennett explained. "We used the CO2 data to estimate that about 10 percent of the Earth's terrestrial biomass burned during this event." Independent calculations of soot concentrations performed by lead author Wendy Wolbach, a professor of chemistry at DePaul University, and Adrian Melott, professor emeritus at the University of Kansas, confirmed that estimate, which equals approximately 10 million square kilometers -- a phenomenal area to burn in just a few days to weeks.

The primary biomass burning proxy recorded in lake, marine and terrestrial sediment cores is charcoal, which was found at the YDB in 129 lake core records around the globe. "The biomass burning was so extensive and voluminous -- we have evidence of it over North America, South America, Western Europe and the western part of Asia -- that it blocked out the sun, causing an impact winter, with profound effects on life on Earth, particularly large animals and humans," Kennett said. "The impact winter itself was also part of what triggered the Younger Dryas cooling in the Northern Hemisphere."

It could cost Oakland schools $38 million to fix lead contamination

By Ali Tadayon | atadayon@bayareanewsgroup.com | Bay Area News Group

PUBLISHED: February 8, 2018 at 6:22 am | UPDATED: February 9, 2018 at 8:56 am

OAKLAND — Oakland Unified estimates it will cost $38 million to address high lead levels in water taps at its schools.

About $22 million of that estimated cost is for replacing old water lines, and $16 million is for replacing drinking water and sink fixtures, Oakland Unified spokesman John Sasaki told the Oakland Tribune. Under the district’s current plan, the cost will be spread out over five years, he said.

The district seeks to fund most of the work through local construction bonds and grants from state and other programs, Sasaki said. The district is in the midst of a budget crisis that prompted the school board to approve $9 million in mid-year budget cuts.

Deferred maintenance funds, which come out of the district’s general fund every year, will also be used to pay for the fixes, Sasaki said. Every year, between three and five percent of the district’s general fund is budgeted for deferred maintenance, he said.

“This work would be funded in a way to protect classroom funding, and we are currently looking for funding to help pay for this work elsewhere,” Sasaki said. “So, there is no reason to believe that this work would cause budget reductions for the district.”

Sasaki said the $38 million figure is “a very rough estimate that requires lots of due-diligence to validate.”

Fifteen Oakland Unified schools have been found to have at least one water fixture with lead levels exceeding the federal recommended cap of 15 parts per billion, district officials said at the Jan. 24 school board meeting. Oakland Unified is testing water taps at all its schools as well as some charter schools, and has contracted the East Bay Municipal Utility District to conduct a second round of testing.

The district has already replaced some of the high-lead fixtures, and the others have been taken out of commission until they are replaced.

Advocates have urged the district to go beyond the federal guidelines and replace fixtures that have lead levels exceeding one part per billion. The American Academy of Pediatrics has deemed fixtures with any more than one part per billion of lead to be dangerous for children. Oakland pediatrician Dr. Noemi Spinazzi, at a press conference in November, said lead “mimics iron and calcium, which growing children need.” Children’s bodies can absorb a lot of lead, which is stored in the bones, liver, blood and brain. It can lead to anemia, poor growth, fatigue, learning difficulties and even lower IQ levels and developmental delays, she said.

Jason Pfeifle, a health advocate for consumer group CalPIRG, delivered a petition to the school board at its Jan. 24 meeting signed by more than 1,000 people urging the district to adopt a policy that would require every water tap in the school district to be tested and for the district to not allow more than one part per billion of lead in the taps.

“Lead is extremely harmful to children’s health. Even small exposures to lead can do permanent damage to their cognitive development,” Pfeifle said at the meeting.

Congress Extends Tax Breaks for Clean Energy and Carbon Capture

Environmental groups worry the carbon capture credits will just boost fossil fuels. The 'tax extenders', meanwhile, cut some nuclear, solar and geothermal costs.

By Georgina Gustin

Feb 9, 2018

The compromise federal spending bill that Congress passed early Friday includes an array of tax credits for renewable energy, along with a controversial tax break for carbon-capturing technologies that will benefit the fossil fuel industries.

Known as "tax extenders" because they expand or revive temporary benefits that had lapsed, these provisions will provide significant incentives for people and companies to invest in low-carbon forms of energy, ranging from residential installations of solar water heaters and geothermal heat pumps to nuclear power plants.

In some cases, industry groups had lobbied for the provisions for years with little gain while Congress extended solar and wind tax credits during the Obama administration and enacted the Trump administration's recent broad tax break for corporate profits and personal incomes.

This time, the tax extenders made it into a package of spending provisions that will keep the government open for several weeks, while also guaranteeing large increases in military and domestic spending programs for two years, and raising the ceiling on the national debt. The package, which had bipartisan support in the Senate and was endorsed by the White House, overcame opposition from liberal and conservative flanks in the House and settles, at least for now, the main fiscal debates facing Congress.

Some of the tax breaks, like one that restores a tax credit for home geothermal heating and cooling, which can cost tens of thousands of dollars to install, should make a significant difference to individuals trying to lower their carbon footprints. The credit is 30 percent if the system is installed between 2017 and 2019, then drops to 26 percent in 2020 and 22 percent through 2021.
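
To make that schedule concrete, here is a minimal Python sketch that maps an installation year to the credit percentage described in this article and applies it to a hypothetical system cost; the $24,000 installed cost is purely illustrative.

# The residential geothermal credit schedule described above (30% for 2017-2019,
# 26% in 2020, 22% through 2021), applied to a hypothetical system cost.

def geothermal_credit_rate(install_year):
    """Credit percentage by installation year under the extended provision."""
    if 2017 <= install_year <= 2019:
        return 0.30
    if install_year == 2020:
        return 0.26
    if install_year == 2021:
        return 0.22
    return 0.0  # outside the window covered by the extension

if __name__ == "__main__":
    system_cost = 24_000  # hypothetical installed cost of a home geothermal system
    for year in (2018, 2020, 2021, 2022):
        credit = system_cost * geothermal_credit_rate(year)
        print(f"Installed in {year}: credit of ${credit:,.0f}")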

Others, like the carbon capture and sequestration provision, offer complicated and uncertain benefits to costly technologies that might or might not pay off—and that are hotly debated in environmental circles.

Some environmental groups say the carbon capture credits amount to a giveaway to polluters and actually encourage oil production. Others say carbon capture and storage technologies represent a critical solution for reducing carbon dioxide levels in the atmosphere.

Breaks for Biodiesel, but Nothing for Batteries

The bill also offers tax breaks to people for expenditures they made last year, including for vehicles powered by fuel cells and electric motorbikes.

In addition, the legislation extends tax credits to makers of biodiesel and "renewable diesel," and pushes back the deadline for nuclear facilities to qualify for credits. The nuclear provision would benefit the only plant in the works, under construction by Southern Company, in Georgia.

But it disappointed some clean-energy advocates by omitting benefits for battery storage, a growing and crucial element of the expansion of wind and solar power.

Malcolm Woolf, senior vice president for policy at Advanced Energy Economy, a business coalition, said the legislation's tax credit extensions would support investments in companies that together have revenues of $200 billion a year and employ 3 million people.

Two Views on Carbon Capture Credits

Earlier this week, a coalition of environmental groups had asked lawmakers not to expand credits for carbon capture and storage — specifically for enhanced oil recovery, which pumps carbon dioxide from power generators into underground oil fields where it frees up the oil and pushes it toward wells.

Tax credits for carbon capture and storage (CCS) were written into previous legislation, but were limited in ways that cut off financing unless lenders and operators could be certain that the costly, unproven technology would work as intended. Opponents said the restrictions should not be relaxed as much as the industry wanted.

The groups, which included Earthjustice, Friends of the Earth and Greenpeace USA, said the credits, so far "have not delivered measurable progress toward more climate friendly uses of carbon such as permanent sequestration or utilization, and appear to only have been used to increase oil production."

In a blog post Wednesday, before the final bill was released, Elizabeth Noll, legislative director for transportation and energy at the Natural Resources Defense Council, wrote, "Carbon capture can be a useful tool to reduce emissions of dirty power plants and industrial sources, but we need to strengthen accountability, not weaken it, to ensure taxpayer investments are benefiting them."

Noll points out that the oil and gas industries already receive $15 billion in other forms of subsidies each year. The tax overhaul from December gives those industries billions of dollars in additional tax cuts by lowering the corporate income tax rate.

Other green groups have long pushed for more support for carbon-capture technologies and are applauding its inclusion. Senators with widely divergent views of climate policy, including Democrat Sheldon Whitehouse of Rhode Island and Republican Leader Mitch McConnell of Kentucky, favored it.

Bob Perciasepe, president of the Center for Climate and Energy Solutions (C2ES), praised the tax cuts for promoting low-carbon sources of energy, which he called "critically necessary to the goal of reducing greenhouse gas emissions by 80 percent by 2050."

Supporters say the more generous tax credits could provide an essential boost for the industry, much as solar and wind credits did. The National Enhanced Oil Recovery Initiative (NEORI), a coalition of oil, gas, ethanol and environmental groups that worked on the legislation for more than six years, says the extended credits will only apply to projects that demonstrate they can successfully capture and store carbon dioxide.

"This is a climate bill, really," said Stuart Ross of the Clean Air Task Force. "It's really helping CCS, which the IPCC says we have to have because the fossil fuel industry is going to be around for some time."



Archives:

Click here to view our environmental news archive
The Utica Rome Green ExpoTM charitable event, presented by: Energy Users Consulting Services
© all rights reserved