MAKING OUR GREEN ECONOMY COME ALIVE AND THRIVE™

NEWS

ENVIRONMENTAL NEWS

Remember to also check out the "Green Local 175 News" page for environmental and green news within a 175-mile radius of Utica-Rome. Organizations and companies can send their press releases to info@greenlocal175.com. Express your viewpoint on a particular issue by sending an e-mail to our opinion page, "Urge to be Heard". Lastly, for more environmental information, listen to our Green Local 175 Radio & Internet Show.


Harvey Could Reshape How and Where Americans Build Homes

By Christopher Flavelle, for Bloomberg, August 30, 2017

Storm comes as U.S. flood insurance program is up for renewal

Texas has one of the most relaxed approaches to building codes

Jerry Garcia’s home in Corpus Christi missed the worst of Hurricane Harvey by just a few miles and lost nothing more than some shingles and his backyard pier, which turned up further down Oso Bay. A 5-foot bulkhead and sloping lawn shielded it from the flooding that’s devastated parts of Texas.

A home builder, Garcia said he built this place, like all the houses he builds, "above code" -- stronger than the standards required by law, which in Texas tend to be weaker than in most states. But he doesn’t think those tougher standards should be imposed on every builder.

"You’ve got to find that medium, to build affordable housing," Garcia, 65, said sitting in his house as Harvey’s rains continued to pound the coast further north. Tougher mandatory codes mean higher costs, which means fewer homes. And if insurers had their way, he added, every home would be "a box with two doors and no windows."

Hurricane Harvey has highlighted a climate debate that had mostly stayed out of public view -- a debate that’s separate from the battle over greenhouse gas emissions, but more consequential to the lives of many Americans. At the core of that fight is whether the U.S. should respond to the growing threat of extreme weather by changing how, and even where, homes are built.

That debate pits insurers, who favor tighter building codes and fewer homes in vulnerable locations, against homebuilders and developers, who want to keep homes as inexpensive as possible. As the costs of extreme weather increase, that fight has spilled over into politics: Federal budget hawks want local policies that will reduce the cost of disasters, while many state and local officials worry about the lost tax revenue that might accompany tighter restrictions on development.

Harvey slammed ashore in Texas Friday with historic levels of rain and flooding. On Wednesday, the storm returned, making landfall a second time in southwestern Louisiana, which was devastated by Hurricane Katrina in 2005. At least 15 people have died so far in Texas, according to a count by the Austin American-Statesman newspaper. Thousands more have been displaced from their homes. Early estimates on damages range from $42 billion to more than $100 billion.

Contributing to the high losses is the fact that Texas, despite being one of the states most vulnerable to storms, has one of the most relaxed approaches to building codes, inspections, and other protections. It’s one of just four states along the Gulf and Atlantic coasts with no mandatory statewide building codes, according to the Insurance Institute for Business & Home Safety, and it has no statewide program to license building officials.


Texas policies leave home-building decisions to cities, whose record is mixed: Corpus Christi uses codes that reflect national standards, minus the requirement that homes be built one foot above expected 100-year-flood levels, according to the Federal Alliance for Safe Homes. Nueces County, which encompasses Corpus Christi, has no residential building code whatsoever.

The consequence of loose or non-existent codes is that storm damage is often worse than it needs to be. "Disasters don’t have to be devastating," said Eleanor Kitzman, who was Texas’s state insurance commissioner from 2011 to 2013 and now runs a company in South Carolina that constructs and finances coastal homes built above code. "We can’t prevent the event, but we can mitigate the damage."

Any proposal that would increase costs in Texas draws pushback from home builders, a powerful group in a state where people despise red tape about as much as they hate taxes.

"They are not big on regulation," said Julie Rochman, chief executive officer of the insurance institute. That skepticism about government was on display in 2013, when the state’s two senators voted against additional federal funding to clean up after Superstorm Sandy. But it can be applied selectively: Governor Greg Abbott requested federal money for Hurricane Harvey before it even made landfall.

Building codes elicit little support in Austin. At the end of this year’s state legislative session, the Texas Association of Builders posted a document boasting of its success at killing legislation it didn’t like. That included a bill that would have let cities require residential fire sprinklers, and another that would have given counties with 100,000 people or more authority over zoning, land use and oversight of building standards--something the builders’ group called “onerous.”

Ned Muñoz, vice president of regulatory affairs for the Texas Association of Builders, said cities already do a good job choosing which parts of the building code are right for them. And he argued that people who live outside of cities don’t want the higher prices that come with land-use regulations.

Muñoz said his association’s target is "unnecessary and burdensome government regulations, which increase the price of a home."

The fight in Texas is a microcosm of a national battle. The International Code Council, a Washington nonprofit made up of government officials and industry representatives, updates its model codes every three years, inviting state and local governments to adopt them. Last year, the National Association of Home Builders boasted of its prowess at stopping the 2018 codes it didn’t like.

"Only 6 percent of the proposals that NAHB opposed made it through the committee hearings intact," the association wrote on its blog. Some of the new codes that the home builders blocked had been proposed by the Federal Emergency Management Agency -- the same agency that’s on the hook when homes collapse, flood or wash away. And when FEMA is on the hook, it’s really the taxpayer.

Ron Jones, an NAHB board member and builder in Colorado who is critical of the organization’s views on codes and regulations, said that while the focus now should be on helping people hurt by Harvey, he hoped the storm would also force new thinking.

"There’s no sort of national leadership involved," said Jones. "For them it’s just, ’Hell, we’ll rebuild these houses as many times as you’ll pay us to do it.’"

The home builders demonstrated their power again this year, when President Donald Trump reversed an Obama administration initiative restricting federally funded building projects in flood plains. "This is a huge victory for NAHB and its members," the association wrote on its blog.

Yet on other issues, Trump’s administration appears to be siding with those who favor tougher local policies. In an interview just before Harvey, FEMA chief Brock Long expressed support for an Obama administration proposal to spur more local action on resilience, such as better building codes, as a condition of getting first-dollar disaster relief from Washington.

"I don’t think the taxpayer should reward risk," Long told Bloomberg in the interview, four days before Harvey slammed into Texas.

It may seem surprising that a Republican administration would side with its Democratic predecessor on anything, especially something related to climate change. But the motivation is less ideological than practical. Over the past decade, the federal government spent more than $350 billion on disaster recovery, a figure that will almost certainly increase without major changes in local building codes and land-use practices.

And much of that money goes to homes that keep getting hit. That’s true for the National Flood Insurance Program, which Congress must reauthorize by the end of next month; some lawmakers, and Long himself, have said homes that repeatedly flood should be excluded from coverage. But there are also 1.3 million households that have applied for federal disaster assistance money at least twice since 1998 -- many of them in the same areas hit hardest by Hurricane Harvey.

In his interview, Long said his focus as FEMA’s administrator will be working with cities and states to ensure that areas hit by disasters get rebuilt differently, in a way that’s more resilient.

Some state lawmakers acknowledge the need to at least consider doing things differently. Todd Hunter, who represents this part of the coast in the Texas House of Representatives, said Harvey will ignite a conversation about the need for statewide building codes.

"The discussion needs to start," Hunter said in an interview at his law office overlooking downtown Corpus Christi, where many of the buildings visible out his window were still without power. And that discussion could go beyond codes: "We need to take a look at where structures are being built."

Others are less optimistic. If people living along the Texas coast had to pay the full cost of the risk they face, some parts of that coast might empty out. That’s what worries Albert Betts Jr., executive director of the Texas Insurers Council, the trade association representing insurance companies. And it’s why he thinks Hurricane Harvey won’t shift public policy.

It’s not that Betts doesn’t like a fight. At his office off a freeway in Austin, his desk is adorned with a hammer on one side and a miniature punching bag on the other. But Betts said the price of real change could be too high.

"I can’t sit here and tell you, as a Texan, that I don’t want that area developed," Betts said. "Smarter people than me have yet to figure that out."

— With assistance by Peter Coy

Love of Coastal Living Is Draining U.S. Disaster Funds

Welcome to Alabama’s Dauphin Island, where it floods all the time.

By Christopher Flavelle, August 31, 2017

Dauphin Island has one of the highest concentrations of chronically flooded homes in the U.S. Photographer: Nicole Craine/Bloomberg

About 400 miles east of Houston, off the coast of Alabama, Dauphin Island was spared the initial impact of Hurricane Harvey. Yet this tiny sliver of land near the mouth of Mobile Bay is just as important as the battered metropolis to the debate over American disaster policy. That’s because the 14-mile-long island, home to only about 1,300 people, floods again and again, hurricane or no hurricane. And again and again, those homes are repaired and rebuilt, largely at the expense of U.S. taxpayers.

The homes on Dauphin Island are among the 5 million that are covered by the National Flood Insurance Program. Founded in 1968 to make sure homeowners in flood-prone areas could get affordable insurance, the program ends up paying most residential flood insurance claims in the U.S. Partly as a result, development along coasts and riverbanks and in flood plains has exploded over the past 50 years. So have claims for flood damages. The NFIP is now about $25 billion in debt.

On Sept. 30, the program is also set to expire. Some members of Congress would like to use a reauthorization vote to enact reforms, including perhaps kicking the most flood-exposed homes off the rolls. Given the pressure to deliver disaster relief quickly post-Harvey, it’s improbable that any initial agreement to extend the NFIP past Sept. 30 will lead to significant reforms. More likely, a short-term extension may fund it through the end of the year. But the problem won’t go away. And the debate is just getting started.

The issues surrounding the NFIP go beyond just insurance and straight to the costs of climate change—specifically, whether the government will concede that the most vulnerable places simply can’t be protected. While hurricanes contribute greatly to costs, putting a sudden spotlight on the insurance issue, it’s the chronic flooding that happens away from the public eye, in places such as Dauphin Island, that slowly drains the NFIP. The island has one of the country’s highest concentrations of houses that the Federal Emergency Management Agency calls “severe repetitive loss”—those that flood most often. The owners of those 84 properties have gotten almost $17 million since 1978, an average of $199,244 each.
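
As a quick sanity check on those Dauphin Island figures, here is a minimal arithmetic sketch (the 84-property count and the $199,244 average come from the article; the total is simply their product, which matches the "almost $17 million" cited):

```python
# Arithmetic check of the Dauphin Island "severe repetitive loss" figures cited above:
# 84 properties averaging $199,244 in flood-insurance payments each since 1978.
properties = 84
average_payment = 199_244  # dollars per property, as stated in the article

implied_total = properties * average_payment
print(f"Implied total payout since 1978: ${implied_total:,}")  # $16,736,496 -> "almost $17 million"
```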

While many areas that depend on flood insurance are poor, the overwhelming majority of buildings on Dauphin’s most vulnerable beaches are vacation homes and rentals. When Texas Republican Jeb Hensarling, chairman of the House Financial Services Committee, argued this year against single mothers being “forced to pay for insurance for some millionaire’s beachfront vacation home,” he could’ve been talking about this town.

Federal payments to places such as Dauphin Island can seem like a reward for stubbornness. In 1979, Hurricane Frederic destroyed the only bridge to the island; in 2004 and 2005, Ivan and Katrina wiped out about 500 homes. Federal funds replaced the bridge and the houses. But officials on the island argue they’ve been left behind, getting just enough money to limp along but not enough to survive the next big storm. After Sandy, Washington agreed to spend almost $1 billion to fortify a single stretch of the New Jersey shore with dunes. Jeff Collier, mayor of Dauphin Island, would like the same treatment. “Nobody’s ever come in and forced sand down my throat,” he says. “I wish they would. Why are we more expendable than anybody else?”

As climate change worsens, it’s getting harder for Congress to pretend there’s enough money to save everyone. John Kilcullen, Mobile County’s director of plans and operations for emergency management, says he thinks the best way to stop history from repeating itself on Dauphin is for the federal government to buy houses and tear them down, especially along the island’s low-lying western section—the same stretch that boasts the biggest, most expensive properties. He’s pessimistic about how many of those homeowners would be willing to sell. “People don’t like being told they can’t live somewhere,” Kilcullen says. “I don’t think it’s politically feasible.”

Explosive costs have a way of changing people’s minds. The U.S. spent more than $350 billion over the past decade responding to natural disasters, yet FEMA budgeted only $90 million this year to prepare communities for those catastrophes. “These storms are breaking the back of FEMA,” says Roberta Swann, director of the Mobile Bay National Estuary Program. “If FEMA starts to say, ‘If a hurricane comes and takes your house, we’re done’—that is going to change the face of coastal communities.”

And yet, if the U.S. wants to stop racking up taxpayer-funded losses, those communities do have to change. The NFIP paid out about $16.3 billion after Katrina in 2005 and $8.4 billion for Sandy in 2012, according to the Insurance Information Institute. That same year, Congress passed a law that overhauled the program, making it significantly more expensive to insure a home in a flood-prone area. The law, known as the Biggert-Waters Flood Insurance Reform Act of 2012, was the product of a rare alliance between environmentalists and Tea Party activists. After it went into effect, some coastal homeowners saw their annual flood insurance premiums spike to $20,000 or more from as low as $600. Two years later, as opposition grew, Congress rolled back most of the reforms.

“Nobody’s ever come in and forced sand down my throat,” says Dauphin Island Mayor Jeff Collier. “I wish they would. Why are we more expendable than anybody else?” Photographer: Nicole Craine/Bloomberg

Last year the Natural Resources Defense Council won a lawsuit seeking to uncover how many homes FEMA has designated severe repetitive loss. The data the agency was forced to release showed that about 30,000 properties had cost taxpayers $5.5 billion since 1978. “Repeatedly flooded properties cost the nation billions in avoidable losses and repeated trauma to the families that live there,” says Rob Moore, a water-policy expert at the council. “We should be offering assistance to move to higher ground before they run out of options, not after.”

In 2016, FEMA bought 960 severe repetitive loss homes to tear down; at that rate, it would take until midcentury to buy all 30,000. By then, as many as 265 towns and cities will suffer what the Union of Concerned Scientists in a July report called “chronic inundation”—flooding that covers at least 10 percent of their land an average of twice a month. “The human inclination to stay in place, and to resist change at all costs, is pretty strong,” says Erika Spanger-Siegfried, an analyst at the Union of Concerned Scientists and a lead author of the report. “It will catch up with those locations,” she says, “and they will need to move eventually—but at a greater cost.”
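
The "until midcentury" estimate is easy to reproduce with back-of-the-envelope arithmetic. The sketch below assumes a constant buyout pace, which is an illustrative simplification; the 960 buyouts in 2016 and the roughly 30,000 properties are the article's figures:

```python
# Rough timeline for buying out all severe repetitive loss homes at 2016's pace.
properties_total = 30_000   # severe repetitive loss properties (article figure)
buyouts_per_year = 960      # homes FEMA bought in 2016 (article figure)
start_year = 2016

years_needed = properties_total / buyouts_per_year   # 31.25 years at a constant pace
finish_year = start_year + years_needed
print(f"Years needed: {years_needed:.1f}")    # ~31 years
print(f"Finished around: {finish_year:.0f}")  # ~2047, i.e. midcentury
```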

On Dauphin Island, beachfront homes are still going up. Yet some residents may be ready to leave. After Katrina, 35 homeowners applied for federal buyouts, says Mayor Collier, but none were approved. If the government doesn’t either buy out homes or fortify the beach, it will just cost more when the next hurricane hits, Collier argues, while standing amid homes under construction. Bobcat loaders zoom by, returning sand blown off the ever-shrinking beach. “Pay me now,” he says, “or pay me later.”

BOTTOM LINE - While big storms grab headlines, chronic flooding in Dauphin Island and places like it is slowly draining the NFIP dry, forcing hard questions about where we live.

Now Comes the Uncomfortable Question: Who Gets to Rebuild After Harvey?

Kate Aronoff for The Intercept, August 30, 2017, 1:50 p.m.

In the decade-plus since Hurricane Katrina, Houston has been debating what it needs to protect itself from a catastrophic weather event. While that conversation has gone on, the city did what most cities do: carried on business as usual, constructing more than 7,000 residential buildings directly in Harris County’s Federal Emergency Management Agency-designated flood plain since 2010.

That kind of construction spree, which was detailed in a prescient investigation by ProPublica and the Texas Tribune last year, is not just head-slapping in hindsight; it actually produces its own greater risks, because it means paving over the kinds of wetlands that can help buffer against extreme flooding events.

But from a financial perspective, there was a logic behind those 7,000 buildings: each is eligible for subsidized flood protection through the National Flood Insurance Program, which just so happens to be expiring at the end of September, meaning debate over its reauthorization will compete for time with everything else on a packed fall congressional calendar.

“Harvey should be a wake-up call” for deep flood insurance reform, says Rob Moore, a senior policy analyst with the Natural Resources Defense Council’s water team. “One of the biggest shortcomings is that the program has always been intended first and foremost to be a rebuilding program.”

While environmental groups, including the Environmental Defense Fund and the Natural Resources Defense Council, contend that the NFIP encourages irresponsible forms of coastal development, they have odd company in a coalition fighting for the program’s reform – insurance companies and free market groups that just want the government out.

Hurricane Harvey is likely to focus attention on questions the federal government and the American public have in large part worked hard to avoid, because they don’t cut cleanly along ideological or partisan lines. And because they’re not much fun to think about. What does home ownership look like in an age of climate change? When is it OK to rebuild, and when is it time to retreat?

The 30 Texas counties declared disaster areas by Gov. Greg Abbott are home to nearly 450,000 policies underwritten by the National Flood Insurance Program, which is administered by FEMA. The NFIP was $24.6 billion in debt before Harvey made landfall, and can borrow just $5.8 billion more from the Treasury before the program’s current round of congressional reauthorization expires. Waters are still rising, and it will be several days or even weeks before damage totals are assessed, but early insurance industry estimates say the figure could match costs incurred from Hurricane Katrina, the most expensive disaster in U.S. history. Homeowners hit by the storm will now enter into a months- and potentially years-long claims fulfillment process, attempting to rebuild what they lost.

As lawmakers return to Washington early next month to determine the future of the beleaguered agency, there are bigger questions on the table than how to make the NFIP financially solvent. Both sides of the aisle agree that the program is unsustainable as is; FEMA is predicting that the NFIP’s rolls could increase by 130 percent by century’s end. Now — at what might be the worst possible moment to do so — Congress is essentially tasked with deciding who gets to live on the country’s most vulnerable coastlines.

When it was founded in 1968, the NFIP set out to fill a gap left by the private market. Unlike most other kinds of insurance, which protect policyholders against the costs of things like car accidents and medical expenses, flood payouts come all at once and in huge numbers, meaning many insurers lack the cash on hand necessary to keep up with an influx of claims. While NFIP policies are underwritten by the federal government, they’re administered by private insurers, who earn a percentage on the policies they write. Homeowners in flood plains with federally backed mortgages are required to purchase flood insurance, though anyone can purchase an NFIP policy. For those required to take out policies, failure to pay can result in foreclosure.

Starting with hurricanes Katrina and Rita in 2005, the program has been hit with a barrage of massive payouts that have sunk it deeper and deeper into the red. Flooding in the Midwest compounded that debt in 2008, followed soon after by Hurricane Ike. Sandy tacked on another $8 billion in 2012, and there have been several major flooding events since.

Of course, the NFIP is just one program in a web of agencies dealing with disaster mitigation and response, ranging from FEMA’s emergency relief operations to the Department of Housing and Urban Development, which administers block grants to help residential buildings recover from destructive storms. Beyond the fact that its services are relatively difficult for homeowners to access — requiring a special kind of patience for and finesse with navigating layers of bureaucratic red tape — the NFIP’s benefits aren’t immediately recognizable just after a storm hits. Where few question how much FEMA is spending to respond to a storm like Harvey, the fact that the NFIP’s benefits play out quietly and over the longer term makes it easier to point to its debt overhang as an example of funds poorly spent.

The key tension now plaguing the NFIP is that it has always had to balance two often competing goals. “It’s really a conflict between the actuarial tendency of the program and a general desire among policymakers that the prices not be too high,” says Rebecca Elliott, a London School of Economics sociologist who researches the program. Severe Repetitive Loss Properties — those that have been consistently damaged and rebuilt — account for just 1 percent of policies but as much as 30 percent of the funds paid out in claims. Many of the residential properties that carry NFIP policies have also been “grandfathered” in, meaning that policyholders can keep paying existing rates as their flood risk increases.

Among the program’s biggest problems is just how high-risk its ratepayers are. In the case of health care, for instance, insurance operates on the basic principle that healthy policyholders with cheaper coverage should help pay the way for those who are sicker and more expensive. Referencing that system, Elliott says, “Right now we have the sickest people in the pool. The people who require insurance are only the high-risk flood zone. We know that’s not a great recipe for insurance pools.”

Yet whereas most kinds of public assistance in America benefit poor and working-class people, what’s saved the NFIP from outright attacks like the ones visited on Medicaid has been the fact that its beneficiaries also include the 1 percent and their waterfront homes. Consequently, NFIP reauthorization has always tended to be a rare spot of bipartisan alignment in Congress. “Democrats and Republicans both live in floodplains,” Elliott explains. If anything, the splits are regional, with representatives from flood-prone and politically divergent states like New York, Louisiana, and California coming together to keep rates artificially low.

In part because of who the NFIP serves, reform efforts have proven controversial. The last major overhaul of the program was passed into law just before Hurricane Sandy struck New York and New Jersey in 2012. That legislation, Biggert-Waters, sailed through Congress uncontroversially, and would have brought NFIP policies up to market rate on a rapid timeline and eliminated the program’s grandfather clause.

Sanitation workers throw out debris from a flood-damaged home in Oakwood Beach in Staten Island, N.Y., on Feb. 5, 2013. Photo: Spencer Platt/Getty Images

Post-Sandy, those measures got much less popular. “Families who had their lives destroyed were going to rebuild and finding out that … if they were going to rebuild their homes, they might not be able to afford to insure them,” Elliott told me. Premiums would have jumped by 25 percent per year. Blowback was so intense that one of the bill’s sponsors, Maxine Waters, D-Calif., became a major voice rallying successfully against its implementation. The result of that fracas was the Homeowners Flood Insurance Affordability Act in 2014, which reintroduced grandfathering and allowed rates to rise more gradually. “It appeased the protest for the time being, but the rates are still going up,” Elliott says. “All of the things that homeowners were upset about are still taking place, just on a slower timeline.”

One thing that bill also lifted from Biggert-Waters was a study of how those rate hikes would impact low-income owners. “HFIAA was a recommitment to the idea that the NFIP was now going to be in the business of adjudicating who truly needed support to pay for their flood insurance. Who was getting a good deal but didn’t actually need it? It’s basically a way of introducing means tests to a program that didn’t have it before,” Elliott adds.

The larger balance of forces surrounding flood insurance reform might be among the most novel that American politics has to offer. Pushing against an overhaul are major developers and real estate interests, like the National Association of Realtors. One of the biggest voices pushing for it has been a coalition called Smarter Safer, an active proponent for Biggert-Waters that encompasses everyone from big environmental groups to insurance companies to conservative free market think tanks. For different reasons, all of them want to see rates that are more actuarially sound and less heavily subsidized by taxpayers.

“We’re concerned about the cost to taxpayers,” says Steve Ellis, vice president of the group Taxpayers for Common Sense, who’s worked on multiple rounds of NFIP reauthorization. He says that for many, Biggert-Waters was “too much too fast,” as well as a case of bad timing. Ultimately, though, he hopes the reforms it outlined come to pass. “We want to see a program that charges closer to risk-based rates and sheds as much of the risk off the program and into the private sector,” with the private sector playing an increasingly large role in underwriting policies. Eventually, Ellis hopes the NFIP will become a flood insurer of last resort.

Other Smarter Safer members are more gung-ho. Groups like the Heartland Institute spinoff R Street, a free market think tank, see the current structure of the NFIP as an expensive entitlement and want to get the government out of the flood insurance business entirely. Among the other voices calling on the federal government to shoulder less of the cost is current FEMA Administrator Brock Long, who said just before Harvey made landfall that he doesn’t “think the taxpayer should reward risk going forward.”

How exactly Harvey will impact the NFIP’s reauthorization remains to be seen, though even before the storm the chances for wholesale overhaul were slim. Two fledgling proposals in the Senate — one from Sens. Bill Cassidy, R-La., Kirsten Gillibrand, D-N.Y., and Shelley Moore Capito, R-W.Va., and another from Sen. Bob Menendez, D-N.J., with several co-sponsors — would each extend the reauthorization beyond five years, as well as increase oversight over the private insurers who write FEMA-backed claims. If passed, the Cassidy-Gillibrand bill would “[remove] barriers to privatization,” according to draft language, by allowing the companies that currently administer the policies to start underwriting them as well. Another proposal, passed through the House Financial Services Committee and praised by Smarter Safer, would also see the private sector play a larger role.

While few people are opposed outright to giving the private sector a bigger role to play, experts are leery of viewing privatization as a solution to what ails the NFIP. For one, private insurers’ involvement thus far has been hugely problematic. A report released last year by New York Attorney General Eric Schneiderman accused FEMA-contracted policy writers of widespread fraud, noting that a “lack of transparency and accountability can and does lead to inflated costs for services.” An earlier investigation by NPR found that private insurers made $400 million in Sandy’s aftermath, and that nearly a third of all NFIP funds spent between 2011 and 2014 went directly to private insurers instead of to homeowners. The more structural danger of privatization is that insurance companies could simply skim off the most profitable rates, leaving the NFIP to pick up the most costly policies. In health care terms, then, privatization could render an already sick pool even sicker.

The even bigger policy question is whether higher and more competitive rates will actually discourage people from living along high-risk coastlines, or just leave the shore open only to those wealthy homeowners and developers who can afford higher rates and round after round of rebuilding. President Donald Trump also repealed an Obama-era mandate for construction in flood-prone areas, so there’s no guarantee that new shorefront structures will be able to withstand future damage. The result of higher rates, Elliott predicts, “is the socioeconomic transformation along with the physical transformation of the coastlines.”

Of course, the elephant wading through the flood is the fact that there are now millions of people living in areas that shouldn’t be inhabited at all, no matter the cost. “There’s the uncertainty of living at risk,” Elliott says, “and there’s the uncertainty of what it means to stay in your community when in the near to medium term, it’s going to become more expensive for you to do so — and in the long term, physically impossible.”

People are stranded on a roof due to flood waters from Hurricane Katrina in New Orleans, on Aug. 30, 2005. Photo: Vincent Laforet/Pool/AFP/Getty Images

Virtually no one thinks that the program’s rate structure is politically sustainable as is, and transformational changes to American coastlines will happen regardless of what happens to the NFIP. Beyond destructive storms becoming more common as a result of climate change, the sea level rise likely to happen over the next century will make several American cities more vulnerable to everyday flooding. Miami Beach today floods on a sunny afternoon. By 2100, for instance, Savannah, Georgia, could face two crippling floods per month. Along the same timeline, the Bronx may become the only New York City borough spared from a similar form of chronic inundation, a term coined by the Union of Concerned Scientists to describe persistent flood risk. In all, sea level rise could force as many as 13 million Americans to relocate by the end of the century.

What’s now up for debate is when and how the retreat happens, and on whose terms. The NRDC and others are advocating to incorporate buyout programs more fully into the NFIP, allowing homeowners in persistent-risk properties to sell their homes back to the government at pre-flood prices, after which point the land is returned to wetlands, waterfront parks, or some other kind of absorbent public space. “The temptation is to always try to identify some one-size-fits-all solution,” Moore says. “We’re at a point where we know that’s not the case for these issues. For NRDC, the real emphasis needs to be on how we help low- and middle-income homeowners live in a place that is far from flooding.”

There were some 40,000 buyouts of high-risk properties between 1993 and 2011, mostly for homes along rivers in inland states. When people receive relocation funds, though, it tends to be through state or locally run pilot projects, and there’s not yet enough data to determine how effective these programs have been in the long run. So while there is some precedent in the U.S. for helping coastal residents move away from high-risk flood zones, there is still no federal plan to make moving a realistic possibility for low- and middle-income homeowners. Examples of how to do it well are few and far between.

What MIT postdoctoral fellow Liz Koslov has found in her research into planned retreat is that there are plenty of communities ready and willing to move — at least under the right circumstances. “I was shocked about how positive they were about the process,” she says of the people she met doing field work on Staten Island, where two neighborhoods successfully lobbied Albany for relocation funds. “I expected a lot more negative criticisms, but people talked about it as a process they found very empowering. It started as a bottom-up effort, without any sort of government relocation plan. … It really helped that people were hearing about it from their neighbors, in community meetings.”

Still, buyouts are no quick fix, and scaling them up at the national level would take careful planning. “From the federal and state governments’ point of view, buyouts are great. They save a huge amount of money, absorb water, and can keep nearby properties safer,” Koslov says. At the local level, though, relocation plans run the risk of bringing “all the cost and none of the benefits.” Especially for cities without a tax base as big as New York City’s, relocation can erode funds for things like schools and public services, and potentially even whole communities. That’s why New York state’s post-Sandy relocation plan included a 5 percent bonus for homeowners who chose to relocate within the county. Blue Acres, a similar post-Sandy buyout program in New Jersey, has faced hostility from towns wary of a population drain.

Even those communities that do want to relocate face serious barriers at the federal level. FEMA currently requires a disaster to have been declared in order for the agency to release relocation funds. That’s why New York and New Jersey were each able to launch relatively successful buyout programs post-Sandy, but why Alaska — where tribal nations’ land is under rapid threat from erosion — has been unable to do the same despite years of organizing.

The Staten Island neighborhoods that ended up being relocated also only received their buyouts after “months of grassroots lobbying and protests and petitions and an enormous amount of collective effort,” Koslov says. In both neighborhoods, she adds, the main populations were white, middle-class, and home-owning public sector employees (cops, firefighters, postal workers) who had retired but were young enough to use their time actively — “the last people you would expect to collectively organize for climate change adaptation,” Koslov says. Both areas voted overwhelmingly for Trump. Aside from some obvious concerns for renters — whose landlords would have to be bought out — it remains to be seen whether working-class neighborhoods of color — like many of those likely to be worst hit by flooding — would see the same results.

There are few easy answers for how coastal communities can adapt to rising tides and more volatile weather, let alone for how to make the NFIP more sustainable in that context. What does seem clear is that leaving the responsibility of planning for and adapting to climate change up to individuals may be the wrong path for disaster policy in the 21st century. “Individuals vary tremendously in their ability to respond rationally to incentives,” Elliott says. “My sense is that when it comes to the problem of what inhabited places are going to look like in a future defined by climate change, that’s a collective problem that’s going to require more collective solutions.”

In terms of policy, that could mean making it easier for communities to organize and access funding, as Staten Island residents have done post-Sandy, or even attaching a rider to different types of insurance policies, spreading the financial risk of flooding around to more than just the people most likely to be affected in the next several years.

The need for a more collective response has implications beyond specific measures, though, and extends well outside the scope of flood insurance. Amid the chaotic reports of the situation unfolding in Houston are also those of good Samaritans rescuing total strangers, the kind of post-disaster solidarity writer Rebecca Solnit has called “a paradise built in hell.” Paradise may not be on the docket as the NFIP’s future gets determined this month, but Harvey may be a chance for lawmakers to start thinking more holistically about how to deal with and prevent the kinds of hell climate change is all too likely to keep dishing out.

Full report

https://projects.propublica.org/houston-cypress/?utm_campaign=sprout&utm_medium=social&utm_source=twitter&utm_content=1503846942

Floods in India, Bangladesh and Nepal kill 1,200 and leave millions homeless

Authorities say the monsoon flooding is among the worst the region has seen in years

Chloe Farand, Wednesday 30 August 2017 17:41 BST

The Independent Online

At least 1,200 people have been killed and millions have been left homeless following devastating floods that have hit India, Bangladesh and Nepal, in one of the worst flooding disasters to have affected the region in years.

International aid agencies said thousands of villages have been cut off by flooding with people being deprived of food and clean water for days.

South Asia suffers from frequent flooding during the monsoon season, which lasts from June to September, but authorities have said this year's floods have been much worse.

In the eastern Indian state of Bihar, the death toll has risen to more than 500, the Straits Times reported, quoting disaster management officials.

The paper said the ongoing floods had so far affected 17 million people in India, with thousands sheltered in relief camps.

Anirudh Kumar, a disaster management official in Patna, the capital of Bihar, a poor state known for its mass migration from rural areas to cities, said this year's farming had collapsed because of the floods, which will lead to a further rise in unemployment in the region.

In the northern state of Uttar Pradesh, reports said more than 100 people had died and 2.5 million have been affected.

In Mumbai, authorities struggled to evacuate people living in the financial capital's low-lying areas as transport links were paralysed and downpours led to water rising up to five feet in some parts of the city.

Weather officials are forecasting heavy rains to continue over the next 24 hours and have urged people to stay indoors.

Partially submerged houses are seen at a flood-affected village in Morigaon district in the northeastern state of Assam, India. (REUTERS/Anuwar Hazarika)

In neighbouring Bangladesh, at least 134 people have died in monsoon flooding, which is believed to have submerged at least a third of the country.

More than 600,000 hectares of farmland have been partially damaged and in excess of 10,000 hectares have been completely washed away, according to the disaster minister.

Bangladesh's economy is dependent on farming and the country lost around a million tonnes of rice in flash floods in April.

"Farmers are left with nothing, not event with clean drinking water," said Matthew Marek, the head of disaster response in Bangladesh for the International Federation of Red Cross and Red Crescent.

In Nepal, 150 people have been killed and 90,000 homes have been destroyed in what the UN has called the worst flooding incident in the country in a decade.

According to the Red Cross, at least 7.1 million people have been affected in Bangladesh - more than the population of Scotland - and around 1.4 million people have been affected in Nepal.

The disaster comes as headlines have focused on the floods in Houston, Texas, which authorities have described as "unprecedented".

Officials in Texas have said the death toll now stands at 15 in the wake of Hurricane and Tropical Storm Harvey, with thousands forced to flee their homes.

The rise in extreme weather events such as hurricanes and floods has been identified by climate scientists as a hallmark of man-made climate change.

The US has seen two of its worst storms ever, Hurricane Harvey and Hurricane Katrina, in just over a decade.

India's Prime Minister, Narendra Modi, has said climate change and new weather patterns are having "a big negative impact".

Third-hottest June puts 2017 on track to make hat-trick of hottest years

June 2017 was beaten only by June in 2015 and 2016, leaving experts with little hope for limiting warming to 1.5C or even 2C

The latest figures cement estimations that warming is now at levels not seen for 115,000 years.

First published on Wednesday 19 July 2017 00.17 EDT

Last month was the third-hottest June on record globally, temperature data suggest, confirming 2017 will almost certainly make a hat-trick of annual climate records, with 2015, 2016 and 2017 being the three hottest years since records began.

The figures also cement estimations that warming is now at levels not seen for 115,000 years, and leave some experts with little hope for limiting warming to 1.5C or even 2C.

According to new figures from the US National Oceanic and Atmospheric Administration (Noaa), June 2017 was the third-hottest June on record, beaten only by the two preceding Junes in 2015 and 2016.

The Noaa data show combined land and sea-surface temperatures for June 2017 were 0.82C above the 20th century average, making a string of 41 consecutive Junes above that average.

June 2016 still holds the record at 0.92C above the 20th century average, followed by June 2015 which was 0.89C above the baseline.

The data line up closely with Nasa figures released last week, which are calculated slightly differently, finding the month was the fourth-hottest on record – with June 1998 also being warmer in their data set.

Based on the Nasa data, climate scientist and director of Nasa’s Goddard Institute for Space Studies Gavin Schmidt estimated that 2017 was probably going to be the second-warmest year on record after 2016, but would almost certainly be among the top three hottest years.

Gavin Schmidt (@ClimateOfGavin), July 15, 2017: "With update to Jun, 2017 will almost certainly be a top 3 year in the GISTEMP record (most likely 2nd warmest ~57% chance)." pic.twitter.com/jiR6cCv1x8

The June data show each of the first six months of 2017 ranking among the three warmest on record for the respective month, making it the second-hottest first half of a year on record – again, beaten only by the previous year.

The near-record temperatures continued this year despite the passing of El Niño, which normally warms the globe, and despite its opposite – La Niña – currently suppressing temperatures.

The warming trend is almost certainly caused by greenhouse gas emissions – mostly the result of burning fossil fuels – with many studies showing such warm years would be almost impossible without that effect.

Last year, Michael Mann from Pennsylvania State University published a paper showing the then-record temperatures in 2014 would have had less than a one in a million chance of occurring naturally.

“We have a follow-up article that we’ve submitted showing that the likelihood of three consecutive record-breaking years such as we saw in 2015-2017 was similarly unlikely,” he told the Guardian over email. “In short, we can only explain the onslaught of record warm years by accounting for human-caused warming of the planet.”

Andy Pitman from the University of New South Wales in Sydney, Australia said the onslaught of very rapid warming in the past few years is likely a result of the climate system “catching up” after a period of relative slow warming caused by natural variability – the so-called “hiatus”.

“I do not think the recent anomalies change anything from a science perspective,” he said. “The Earth is warming at about the long-term rates that were expected and predicted [by models].”

But Pitman said the ongoing trend was “entirely inconsistent” with the target of keeping warming at just 1.5C above pre-industrial temperatures.

Current trends suggest the 1.5C barrier would be breached in the 2040s, with some studies suggesting it might happen much sooner.

“In my view, to limit warming to 2C requires both deep and rapid cuts and a climate sensitivity on the lower end of the current range,” Pitman said. “I see no evidence that the climate sensitivity is on the lower end of the current range, unfortunately.”

“It would be a good idea to cut greenhouse gas emissions rather faster than we are.”

"Heritage at Risk: How Rising Seas Threaten Ancient Coastal Ruins"

"The shores of Scotland’s Orkney Islands are dotted with ruins that date to the Stone Age. But after enduring for millennia, these archaeological sites – along with many others from Easter Island to Jamestown – are facing an existential threat from climate change."

"Perched on the breathtaking Atlantic coast of Mainland, the largest island in Scotland’s Orkney archipelago, are the remains of the Stone Age settlement of Skara Brae, dating back 5,000 years. Just feet from the sea, Skara Brae is one of the best preserved Stone Age villages in the world — a complex of ancient stone house foundations, walls, and sunken corridors carved out of the dunes by the shore of the Bay of Skaill. Fulmars and kittiwakes from the vast seabird colonies on Orkney’s high cliffs wheel above the coastal grassland of this rugged island, 15 miles from the northern coast of the Scottish mainland. On a sunny day, the surrounding bays and inlets take on a sparkling aquamarine hue.

Older than the Egyptian pyramids and Stonehenge, Skara Brae is part of a UNESCO World Heritage site that also includes two iconic circles of standing stones — the Ring of Brodgar and the Stones of Stenness — and Maeshowe, an exquisitely structured chambered tomb famous for its Viking graffiti and the way its Stone Age architects aligned the entrance to catch the sun’s rays at the winter solstice. These sites, situated just a few miles from Skara Brae, are part of an elaborate ceremonial landscape built by Orkney’s earliest farmers.

Skara Brae and the neighboring sites have weathered thousands of years of Orkney’s wild winters and ferocious storms, but they may not outlive the changing climate of our modern era. As seas rise, storms intensify, and wave heights in this part of the world increase, the threat grows to Skara Brae, where land at each end of its protective sea wall — erected in the 1920s — is being eaten away. Today, as a result of climate change, Skara Brae is regarded by Historic Environment Scotland, the government agency responsible for its preservation, as among Scotland’s most vulnerable historic sites."

Satellite snafu masked true sea-level rise for decades

Revised tallies confirm that the rate of sea-level rise is accelerating as the Earth warms and ice sheets thaw.

Jeff Tollefson on 17 July 2017

As the Greenland ice sheet thaws, it is helping to raise the world's sea levels.

The numbers didn’t add up. Even as Earth grew warmer and glaciers and ice sheets thawed, decades of satellite data seemed to show that the rate of sea-level rise was holding steady — or even declining.

Now, after puzzling over this discrepancy for years, scientists have identified its source: a problem with the calibration of a sensor on the first of several satellites launched to measure the height of the sea surface using radar. Adjusting the data to remove that error suggests that sea levels are indeed rising at faster rates each year.

“The rate of sea-level rise is increasing, and that increase is basically what we expected,” says Steven Nerem, a remote-sensing expert at the University of Colorado Boulder who is leading the reanalysis. He presented the as-yet-unpublished analysis on 13 July in New York City at a conference sponsored by the World Climate Research Programme and the International Oceanographic Commission, among others.

Nerem's team calculated that the rate of sea-level rise increased from around 1.8 millimetres per year in 1993 to roughly 3.9 millimetres per year today as a result of global warming. In addition to the satellite calibration error, his analysis also takes into account other factors that have influenced sea-level rise in the last several decades, such as the eruption of Mount Pinatubo in the Philippines in 1991 and the recent El Niño weather pattern.

The results align with three recent studies that have raised questions about the earliest observations of sea-surface height, or altimetry, captured by the TOPEX/Poseidon spacecraft, a joint US–French mission that began collecting data in late 1992. Those measurements continued with the launch of three subsequent satellites.

“Whatever the methodology, we all come up with the same conclusions,” says Anny Cazenave, a geophysicist at the Laboratory for Studies in Space Geophysics and Oceanography (LEGOS) in Toulouse, France.

In an analysis published in Geophysical Research Letters in April, Cazenave’s team tallied up the various contributions to sea-level rise, including expansion resulting from warming ocean waters and from ice melt in places such as Greenland. Their results suggest that the satellite altimetry measurements were too high during the first six years that they were collected; after this point, scientists began using TOPEX/Poseidon's back-up sensor. The error in those early measurements distorted the long-term trend, masking a long-term increase in the rate of sea-level rise.

The problem was first identified in 2015 by a group that included John Church, an oceanographer at the University of New South Wales in Sydney, Australia. Church and his colleagues identified a discrepancy between sea-level data collected by satellites and those from tide gauges scattered around the globe. In a second paper published in June in Nature Climate Change, the researchers adjusted the altimetry records for the apparent bias and then calculated sea-level rise rates using a similar approach to Cazenave’s team. The trends lined up, Church says.

Rising tide

Still, Nerem wanted to know what had gone wrong with the satellite measurements. His team first compared the satellite data to observations from tide gauges that showed an accelerating rate of sea-level rise. Then the researchers began looking for factors that could explain the difference between the two data sets.

The team eventually identified a minor calibration that had been built into TOPEX/Poseidon's altimeter to correct any flaws in its data that might be caused by problems with the instrument, such as ageing electronic components. Nerem and his colleagues were not sure that the calibration was necessary — and when they removed it, measurements of sea-level rise in the satellite's early years aligned more closely with the tide-gauge data. The adjusted satellite data showed an increasing rate of sea-level rise over time.

“As records get longer, questions come up,” says Gavin Schmidt, a climate scientist who heads NASA’s Goddard Institute for Space Studies in New York City. But the recent spate of studies suggests that scientists have homed in on an answer, he says. “It’s all coming together.”

If sea-level rise continues to accelerate at the current rate, Nerem says, the world’s oceans could rise by about 75 centimetres over the next century. That is in line with projections made by the Intergovernmental Panel on Climate Change in 2013.
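
To see where a figure of that order comes from, the sketch below does a simple constant-acceleration extrapolation using only the two rates quoted earlier (1.8 millimetres per year in 1993, 3.9 millimetres per year today). The article does not spell out Nerem's actual method, so this is an order-of-magnitude illustration rather than a reproduction of his projection:

```python
# Constant-acceleration extrapolation of sea-level rise from the two quoted rates.
# This is an illustrative sketch, not Nerem's published calculation.
rate_1993 = 1.8              # mm per year
rate_today = 3.9             # mm per year (2017)
years_between = 2017 - 1993

acceleration = (rate_today - rate_1993) / years_between  # ~0.0875 mm per year^2

t = 100  # years into the future
rise_mm = rate_today * t + 0.5 * acceleration * t ** 2
print(f"Implied acceleration: {acceleration:.4f} mm/yr^2")
print(f"Rise over the next century: ~{rise_mm / 10:.0f} cm")  # ~83 cm, the same ballpark as the ~75 cm cited
```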

“All of this gives us much more confidence that we understand what is happening,” Church says, and the message to policymakers is clear enough. Humanity needs to reduce its output of greenhouse-gas emissions, he says — and quickly. “The decisions we make now will have impacts for hundreds, and perhaps thousands, of years.”

Nature 547, 265–266 (20 July 2017) doi:10.1038/nature.2017.22312

China’s outdated coal thermal power technology sold in Vietnam

VietNamNet Bridge - China’s policy on exporting its coal thermal power technology has raised concerns among environmentalists, who say Vietnam could become the destination for the outdated technology.

Nguyen Dinh Tuan, former rector of the HCMC University of Natural Resources and the Environment, said that serious pollution in China was behind the country's decision to export the technology.

“Previously, when China was poor, they had to accept this situation. But now, as China has become the world's second largest economy with foreign reserves of more than $3.2 trillion, it wants to stop this,” he said.

“One of the measures is shutting down coal-fired thermopower plants. And once it doesn’t need the technology anymore, it will export the technology to seek profit,” he explained.

The Chinese government is restructuring energy sources with priority given to clean renewable energy.

Meanwhile, developing coal-fired plants remains the choice of Vietnam.

Vietnamese state management agencies say that most of Vietnam’s coal-fired thermal power projects have EPC contractors from China.

Tuan said while many countries offer coal thermal power technology, Vietnam is still choosing technology and EPC contractors from China.

“Chinese contractors are winning bids by offering very low prices. After winning the bids, they cite many reasons to raise investment capital,” he explained.

In some cases, Chinese contractors were chosen because the contractors offered commissions to Vietnamese officials.

The expert pointed out that China may follow a ‘dual strategy’ – both exporting technology and selling coal.

Vietnam’s coal is not favored for use in coal-fired plants because it is not economical to burn. Therefore, Vietnam sells its high-quality coal and imports lower-quality coal to run its thermal power plants. Because the imported coal is of lower quality, the pollution level is high.

China has been following the coal import policy for many years and has been buying coal from Vietnam. Meanwhile, China sells coal suitable to Chinese-technology power plants to Vietnam.

“It would be bad if Vietnam both imports old polluting technology and polluting material,” Tuan said.

According to Nguy Thi Khanh, director of GreenID, Vietnam would have to import 85 million tons of coal by 2030 to run power plants as shown in the seventh power development program.

Power plants now in operation, with a combined capacity of about 13,000 MW, discharge 15 million tons of ash a year that still cannot be treated, causing serious pollution.

The pollution will be even more serious if Vietnam increases coal-fired thermal power capacity to 55,000 MW, roughly four times the current level.
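
As a rough illustration of that scaling, the sketch below assumes ash output grows linearly with installed capacity, which ignores differences in coal quality and plant technology; the 13,000 MW, 55,000 MW, and 15-million-ton figures are taken from the article:

```python
# Rough scaling of annual ash discharge with installed coal-fired capacity.
# Assumes ash output is proportional to capacity -- an illustrative simplification.
current_capacity_mw = 13_000
planned_capacity_mw = 55_000
current_ash_tons = 15_000_000   # tons of ash per year at current capacity

scale = planned_capacity_mw / current_capacity_mw        # ~4.2x
projected_ash_tons = current_ash_tons * scale
print(f"Capacity increase: {scale:.1f}x")
print(f"Projected ash: ~{projected_ash_tons / 1e6:.0f} million tons per year")  # ~63 million
```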

Methane Seeps Out as Arctic Permafrost Starts to Resemble Swiss Cheese

Measurements over Canada's Mackenzie River Basin suggest that thawing permafrost is starting to free greenhouse gases long trapped in oil and gas deposits.

By Bob Berwyn, InsideClimate News

Jul 19, 2017

In parts of northern Canada's Mackenzie River Delta, seen here by satellite, scientists are finding high levels of methane near deeply thawed pockets of permafrost.

Global warming may be unleashing new sources of heat-trapping methane from layers of oil and gas that have been buried deep beneath Arctic permafrost for millennia. As the Earth's frozen crust thaws, some of that gas appears to be finding new paths to the surface through permafrost that's starting to resemble Swiss cheese in some areas, scientists said.

In a study released today, the scientists used aerial sampling of the atmosphere to locate methane sources from permafrost along a 10,000 square-kilometer swath of the Mackenzie River Delta in northwestern Canada, an area known to have oil and gas deposits.

Deeply thawed pockets of permafrost are releasing 17 percent of all the methane measured in the region, even though these emissions hotspots make up only 1 percent of the surface area, the scientists found.

In those areas, the peak concentrations of methane emissions were found to be 13 times higher than levels usually caused by bacterial decomposition—a well-known source of methane emissions from permafrost—which suggests the methane is likely also coming from geological sources, seeping up along faults and cracks in the permafrost, and from beneath lakes.

The findings suggest that global warming will "increase emissions of geologic methane that is currently still trapped under thick, continuous permafrost, as new emission pathways open due to thawing permafrost," the authors wrote in the journal Scientific Reports. Along with triggering bacterial decomposition in permafrost soils, global warming can also trigger stronger emissions of methane from fossil gas, contributing to the carbon-climate feedback loop, they concluded.

"This is another methane source that has not been included so much in the models," said the study's lead author, Katrin Kohnert, a climate scientist at the GFZ German Research Centre for Geosciences in Potsdam, Germany. "If, in other regions, the permafrost becomes discontinuous, more areas will contribute geologic methane," she said.

Similar Findings Near Permafrost Edges

The findings are based on two years of detailed aerial atmospheric sampling above the Mackenzie River Delta. It was one of the first studies to look for sources of deep methane across such a large region.

Previous site-specific studies in Alaska have looked at single sources of deep methane, including beneath lakes. A 2012 study made similar findings near the edge of permafrost areas and around melting glaciers.

Now, there is more evidence that "the loss of permafrost and glaciers opens conduits for the release of geologic methane to the atmosphere, constituting a newly identified, powerful feedback to climate warming," said the 2012 study's author, Katey Walter Anthony, a permafrost researcher at the University of Alaska Fairbanks.

"Together, these studies suggest that the geologic methane sources will likely increase in the future as permafrost warms and becomes more permeable," she said.

"I think another critical thing to point out is that you do not have to completely thaw thick permafrost to increase these geologic methane emissions," she said. "It is enough to warm permafrost and accelerate its thaw. Permafrost that starts to look like Swiss cheese would be the type that could allow substantially more geologic methane to escape in the future."

Róisín Commane, a Harvard University climate researcher, who was not involved with the study but is familiar with Kohnert's work, said, "The fluxes they saw are much larger than any biogenic flux ... so I think a different source, such as a geologic source of methane, is a reasonable interpretation."

Commane said the study makes a reasonable assumption that the high emissions hotspots are from geologic sources, but that without more site-specific data, like isotope readings, it's not possible to extrapolate the findings across the Arctic, or to know for sure if the source is from subsurface oil and gas deposits.

"There doesn't seem to be any evidence of these geogenic sources at other locations in the Arctic, but it's something that should be considered in other studies," she said. There may be regions with pockets of underground oil and gas similar to the Mackenzie River Delta that haven't yet been mapped.

Speed of Methane Release Remains a Question

The Arctic is on pace to release a lot more greenhouse gases in the decades ahead. In Alaska alone, the U.S. Geological Survey recently estimated that 16-24 percent of the state's vast permafrost area would melt by 2100.

In February, another research team documented rapidly degrading permafrost across a 52,000-square-mile swath of the northwest Canadian Arctic.

What's not clear yet is whether the rapid climate warming in the Arctic will lead to a massive surge in releases of methane, a greenhouse gas that is about 28 times more powerful at trapping heat than carbon dioxide but does not persist as long in the atmosphere. Most recent studies suggest a more gradual increase in releases, but the new research adds a missing piece of the puzzle, according to Ted Schuur, a permafrost researcher at Northern Arizona University.

Since the study only covered two years, it doesn't show long-term trends, but it makes a strong argument that there is significant methane escaping from trapped layers of oil and gas, Schuur said.

"As for current and future climate impact, what matters is the flux to the atmosphere and if it is changing ... if there is methane currently trapped by permafrost, we could imagine this source might increase as new conduits in permafrost appear," he said.

Solve Antarctica’s sea-ice puzzle

John Turner & Josefino Comiso on 19 July 2017

John Turner and Josefino Comiso call for a coordinated push to crack the baffling rise and fall of sea ice around Antarctica.

Antarctica's disappearing sea ice, from its peak in August 2016 to a record low on March 3, 2017.

Different stories are unfolding at the two poles of our planet. In the Arctic, more than half of the summer sea ice has disappeared since the late 1970s [1]. The steady decline is what global climate models predict for a warming world [2]. Meanwhile, in Antarctic waters, sea-ice cover has been stable, and even increasing, for decades [3]. Record maxima were recorded in 2012, 2013 and 2014.

So it came as a surprise to scientists when on 1 March 2017, Antarctic sea-ice cover shrank to a historic low. Its extent was the smallest observed since satellite monitoring began in 1978 (see ‘Poles apart’) — at about 2 million square kilometres, or 27% below the mean annual minimum.

Researchers are struggling to understand these stark differences [5]. Why do Antarctica’s marked regional and seasonal patterns of sea-ice change differ from the more uniform decline seen around most of the Arctic? Why has Antarctica managed to keep its sea ice until now? Is the 2017 Antarctic decline a brief anomaly or the start of a longer-term shift [6, 7]? Is sea-ice cover more variable than we thought? Pressingly, why do even the most highly rated climate models have Antarctic sea ice decreasing rather than increasing in recent decades? We need to know whether crucial interactions and feedbacks between the atmosphere, ocean and sea ice are missing from the models, and to what extent human influences are implicated [6].

What happens in the Antarctic affects the whole planet. The Southern Ocean has a key role in global ocean circulation; a frozen sea surface alters the exchange of heat and gases, including carbon dioxide, between ocean and atmosphere. Sea ice reflects sunlight and influences weather systems, the formation of clouds and precipitation patterns. These in turn affect the mass of the Antarctic ice sheet and its contribution to sea-level rise. Sea ice is also crucial to marine ecosystems. A wide range of organisms, including krill, penguins, seals and whales, depend on its seasonal advance and retreat.

It is therefore imperative that researchers understand the fate of Antarctic sea ice, especially in places where its area and thickness are changing, and why. This requires bringing together understandings of the drivers behind the movement of the ice (through drift and deformation) as well as those that control its growth and melt (thermodynamics). Such knowledge underpins climate models; these need to better represent the complex interactions between sea ice and the atmosphere, ocean and ice sheet. What’s required now is a focused and coordinated international effort across the scientific disciplines that observe and model global climate and the polar regions.

Satellites provide the best spatial information on sea ice around Antarctica. Regular observations reveal how ice cover varies over days, years and decades [8]. Weather, especially storms with high winds, has a daily influence, as well as a seasonal one. Longer-term changes are driven by larger patterns in the temperature and circulation of the atmosphere and oceans.

But near-continuous satellite observations only reach back about four decades. Longer records are essential to link sea-ice changes to trends in climate. Information from ships’ logbooks, coastal stations, whale-catch records, early satellite imagery and chemical analyses of ice cores hint that sea-ice coverage might have been up to 25% greater in the 1940s to 1960s [6].

Source: US National Snow and Ice Data Center

Collecting more ice cores and historical records, and synthesizing the information they contain, could reveal local trends. These would help to identify which climatic factors drive Antarctic sea-ice changes [6]. For instance, in 2017, the area most depleted of sea ice was south of the Eastern Pacific Ocean. This region has strong links to the climate of the tropics, including the El Niño–Southern Oscillation, suggesting that sea ice is sensitive to conditions far from the poles.

Another issue is how the balance between dynamics and thermodynamics drives the advance and retreat of ice. The thickness and volume of ice depend on many factors, including the flow of heat from the ocean to the atmosphere and to the ice. Sea ice influences the saltiness of the ocean: as the ocean freezes, salt is rejected into the water; as the ice melts, fresh water returns to the sea. Such processes are difficult to measure precisely, with uncertainties of 50–100% of the signal, and they are hard to model.

“Sea-ice coverage might have been up to 25% greater in the 1940s to 1960s.”

Satellite altimeters can measure the distance between the surfaces of the sea ice and ocean accurately, and this distance can be used to calculate the ice thickness. But it is hard to interpret these data without knowing how much snow is on the ice, its density and whether the snow’s weight pushes the ice surface below sea level (as is often the case). Calibrating and validating satellite data are crucial, as is developing algorithms to merge and analyse information from a variety of sources.
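The conversion the altimeter data feed into is, at its core, a hydrostatic-equilibrium calculation: the floating ice plus its snow load must be supported by the seawater it displaces. The sketch below illustrates that arithmetic only; the density values and snow depth are assumed, typical figures, not numbers from this article.

```python
# Sketch: converting altimeter freeboard to sea-ice thickness via hydrostatic
# equilibrium. Density values and snow depth are illustrative assumptions.

RHO_WATER = 1024.0  # seawater density, kg/m^3 (assumed)
RHO_ICE = 915.0     # sea-ice density, kg/m^3 (assumed)
RHO_SNOW = 320.0    # snow density, kg/m^3 (assumed)

def ice_thickness(freeboard_m: float, snow_depth_m: float) -> float:
    """Estimate ice thickness from ice freeboard (ice surface minus sea surface).

    Buoyancy balance: rho_w * draft = rho_i * h_ice + rho_s * h_snow,
    with draft = h_ice - freeboard, which rearranges to the return value.
    """
    return (RHO_WATER * freeboard_m + RHO_SNOW * snow_depth_m) / (RHO_WATER - RHO_ICE)

if __name__ == "__main__":
    # 0.30 m of freeboard with 0.20 m of snow implies roughly 3.4 m of ice;
    # ignoring the snow load would give about 2.8 m -- hence the point that
    # unknown snow cover makes the altimeter data hard to interpret.
    print(round(ice_thickness(0.30, 0.20), 2))
    print(round(ice_thickness(0.30, 0.00), 2))
```

When the snow is heavy enough to push the ice surface below sea level, the measured freeboard can even go negative, which is why the snow term dominates the uncertainty in parts of the Antarctic.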

Ice, ocean and air must be sampled at appropriate intervals over a wide enough area to establish how they interact. Research ice-breaker cruises are essential for collecting in situ observations; one such was the US PIPERS (Polynyas, Ice Production and seasonal Evolution in the Ross Sea) campaign in 2017. But ships travel only along narrow routes and for a short time, typically 1–3 months.

Increasingly, autonomous instruments and vehicles — underwater, on-ice and airborne — are providing data throughout the year and from inaccessible or dangerous regions. These robotic systems are giving revolutionary new information and insights into the formation, evolution, thickness and melting of sea ice. Sensors mounted on marine mammals (such as elephant seals), or on floats and gliders, also beam back data on temperature, salinity and other physical and biogeochemical parameters. But to operate continuously, these instruments need to be robust enough to withstand the harsh Antarctic marine environment.

Improve models

Current climate models struggle to simulate the seasonal and regional variability seen in Antarctic sea ice. Most models have biases, even in basic features such as the size and spatial patterns of the annual cycle of Antarctic sea-ice growth and retreat, or the amount of heat input to the ice from the ocean. The models fail to simulate even gross changes [2], such as those driven by tropical influences on regional winds [9]. Because ice and climate are closely coupled, even small errors multiply quickly.

Leopard seals live, hunt and breed among the pack ice in Antarctic waters.

Features that need to be modelled more accurately include the belt of strong westerly winds that rings Antarctica, and the Amundsen Sea Low — a stormy area southwest of the Antarctic Peninsula. Models disagree, for example, on whether persistent westerly winds should increase or decrease sea-ice coverage around Antarctica. Simulations of clouds and precipitation are also inadequate. These cannot currently reproduce the correct amounts of snowfall or sea surface temperature of the Southern Ocean (the latter is widely overestimated by the models).

Climate models must also include the mixing of waters by surface winds and the impact of waves on the formation and break-up of sea ice. Precipitation and meltwater from ice sheets and icebergs influence the vertical structure of the ocean and how it holds heat, which also affects the growth and decay of sea ice. Researchers need to develop models of the atmosphere–ocean–sea-ice environment at high spatial resolution.

Explaining the recent variability in Antarctic sea ice, and improving projections of its future in a changing climate, requires projects that bridge many disciplines. For example, the research communities involved in ice-core analysis, historical data rescue and climate modelling need to collaborate to track sea-ice variability over timescales longer than the satellite record.

“Because ice and climate are closely coupled, even small errors multiply quickly.”

Some gaps in our knowledge can be filled through nationally funded research. More demanding cross-disciplinary work must be supported through international collaboration. Leading the way are organizations such as the Scientific Committee on Antarctic Research, the Scientific Committee on Oceanic Research, the World Climate Research Programme’s Climate and Cryosphere project and the Past Global Changes project. But essential work remains to be done, including: more detailed model comparisons and assessments; more research cruises; and the continuity and enhancement of satellite observing programmes relevant to sea ice. These organizations should partner with funding agencies to make that happen.

Better representations of the Southern Ocean and its sea ice must now be a priority for modelling centres, which have been focused on simulating the loss of Arctic sea ice. Such models will be crucial to the next assessment of the Intergovernmental Panel on Climate Change, which is due around 2020–21. A good example of the collaborative projects needed is the Great Antarctic Climate Hack (see go.nature.com/2ttpzcd). This brings together diverse communities with an interest in Antarctic climate to assess the performance of models.

Nature 547, 275–277 (20 July 2017) doi:10.1038/547275a

Europe's contribution to deforestation set to rise despite pledge to halt it

Europe’s consumption of products such as beef, soy and palm oil could increase its contribution to global deforestation by more than a quarter by 2030, analysis shows

Arthur Neslen in Brussels, Friday 30 June 2017 12.35 EDT, Last modified on Friday 30 June 2017 12.37 EDT

Europe’s contribution to global deforestation may rise by more than a quarter by 2030, despite a pledge to halt such practices by the end of this decade, according to a leaked draft EU analysis.

An estimated 13m hectares (Mha) of the world’s forestland is lost each year, a figure projected to spiral in the next 30 years with the Amazon, Greater Mekong and Borneo bearing the brunt of tree clearances.

But despite signing several international pledges to end deforestation by this decade’s end, more than 5Mha of extra forest land will be needed annually by 2030 to meet EU demand for agricultural products, a draft EU feasibility study predicts.


“The EU pledged to stop deforestation by 2020, not to increase it,” said Sebastian Risso, a spokesman for Greenpeace. “Deforestation is a conveyor belt to devastating climate change and species loss that the world must stop, and fast.

“It is hypocritical for Europe to portray itself as a climate leader while doing little to slow its own insatiable appetite for the fruits of destroyed forests.”

Land clearances for crop, livestock and biofuel production are by far the biggest causes of deforestation, and Europe is the end destination for nearly a quarter of such products – beef, soy, palm oil and leather – which have been cultivated on illegally deforested lands.

According to the draft EU feasibility study, which is meant to provide officials with policy options, the “embodied” deforestation rate – the portion directly linked to EU consumption – will increase from 250,000–500,000 ha in 2015 to 340,000–590,000 ha in 2030.

The figures do not encompass forest degradation or the impacts of lending by EU financial institutions.

The Guardian understands that the report’s methodology has also been criticised within the European commission, where some officials say that its projections are too conservative and do not account for European bioenergy demand.

A deforestation action plan that could include binding legislation is currently under consideration by the European commission.

The EU has signed several pledges to halt deforestation by 2020 within accords that include: the UN convention on biological diversity, the UN sustainable development goals, the UN forum on forests, the New York declaration on forests, and the Amsterdam declaration.

Valero refinery sues PG&E over outage that idled gas making, spewed pollution

Plumes of dark smoke flow from the Valero oil refinery in Benicia May 5 after a power outage. (Chris Riley/Times-Herald)

By Denis Cuff | dcuff@bayareanewsgroup.com | Bay Area News Group

PUBLISHED: July 3, 2017 at 3:02 pm | UPDATED: July 3, 2017 at 6:26 pm

BENICIA — The Valero oil refinery is seeking $75 million from PG&E in a federal lawsuit that blames the utility for a May 5 power outage that disrupted gasoline production for a month, damaged equipment, and showered tons of air pollution on downwind areas.

Some Benicia residents have criticized the oil refinery over pollution from hours of flaring that led to an industrial park being evacuated and an air quality agency issuing at least four violation notices against the oil company.

Valero, however, pointed the finger at PG&E, saying in a federal lawsuit that the utility abruptly pulled the plug on the refinery in a “reckless” 18-minute power failure.

PG&E cut power to both transmission lines, one a regular power source for the refinery and the second a backup source designed to avoid shutdowns, according to the lawsuit filed Friday in U.S. District Court in Sacramento.

“PG&E must take responsibility for the damages it caused to ensure reckless power outages with potentially significant consequences never happen again,” Valero said in a statement.

PG&E apologized for the outage during a Benicia City Council meeting, and said it is taking measures to prevent power failures from happening again.

In response to the lawsuit, PG&E said it has commissioned an outside engineering firm, Exponent, to investigate the cause of the outage.

“We continue to partner with Valero and the city of Benicia to prevent similar power disruptions,” PG&E said. “The safety of our customers, employees and the general public is always our top priority.”

One of the transmission lines leading to Valero was taken offline on May 5 as part of planned maintenance, the utility said. For reasons still being investigated, a breaker opened that morning, shutting down the backup line to Valero.

While the breaker opening is a safety measure to protect equipment, PG&E spokeswoman Deanna Contreras said the utility is trying to determine why it needed to open.

Valero said losing both transmission lines “simultaneously without any prior notice” put plant workers and neighbors at risk.

“PG&E knew or should have known that unplanned power dips or outages can present serious safety issues to refinery employees and members of the community, including the risk of explosions, fires, and serious injury,” attorneys for the oil company wrote in the lawsuit.

Valero said the outage damaged refinery equipment, and caused the refinery to lose money because it couldn’t produce gasoline and other fuels for about the month it took to make repairs and bring the refinery back into production.

Valero also said it deserves damages because its reputation was harmed.

Trump’s ‘drill, baby, drill’ energy policies are being met by ‘sue, baby, sue’

BY STUART LEAVENWORTH

sleavenworth@mcclatchydc.com

President Trump promised to grow jobs by rolling back Obama-era energy and pollution rules. And he’s fulfilling his pledge, but not how he intended. In just six months, Trump’s policies have resulted in a surge in employment — for environmental lawyers.

Since Trump took office, environmental groups and Democratic state attorneys general have filed more than four dozen lawsuits challenging his executive orders and decisions by the U.S. Environmental Protection Agency, Interior Department and other agencies. Environmental organizations are hiring extra lawyers. Federal agencies are requesting bigger legal defense budgets.

The first round of legal skirmishes has mostly gone to the environmentalists. Earlier this month, the U.S. Court of Appeals for the D.C. Circuit blocked the administration from delaying Obama-era rules aimed at reducing oil industry releases of methane, a potent greenhouse gas. On June 14, a federal judge ruled against permits issued to complete construction of the Dakota Access pipeline, a partial victory for the Standing Rock Sioux tribe, which claims its hunting and fishing grounds are threatened by the oil pipeline.

Abigail Dillen, a vice president of litigation for Earthjustice, a nonprofit organization that filed the Dakota Access lawsuit, said her group and others are girding for years of court battles over bedrock environmental laws, such as the Clean Water Act and Clean Air Act. Earthjustice has increased its staff of lawyers since November to 100, and is planning to hire another 30 in coming months.

“What we are seeing is unprecedented,” said Dillen, whose California-based organization was previously known as the Sierra Club Legal Defense Fund. “With this administration, we face a disregard to basic rules of environmental decision making.”

Trump swept to office vowing to boost coal and petroleum production, while ridding the nation of “job-killing regulations.” He’s worked to rapidly deliver on those promises. But the speed of his regulatory rollback — without a full legal team in place to vet new policies — has created a target-rich environment for litigation.

To cap off his 100 days in office, Trump signed an executive order that will expand offshore oil drilling in federal waters and open other areas that were previously off limits to new oil and gas exploration.

Altogether, environmental groups have sued the administration more than 50 times in federal court since Trump took office five months ago, according to a McClatchy review of the federal Public Access to Court Electronic Records (PACER) database. One group alone, the Center for Biological Diversity, has filed or joined in 25 suits, including ones seeking documents related to Trump’s planned border wall with Mexico and Trump’s approval of the Keystone oil pipeline.

In March, the group bragged on Twitter, “We’ve sued #Trump every week since 1/22. 3 suits today brings total to 15!”


During the Obama years, environmental groups also filed suits, both to toughen administration regulations and also to defend policies challenged by industry. Now, said Dillen, “the old work hasn’t gone away, and this administration has created new work for us.”

Environmental law groups have enjoyed a surge in contributions since Trump’s election, much like the ACLU and other organizations fighting Trump’s immigration policies. Earthjustice saw its fundraising increase 160 percent from the election through January, compared to the same period last year.

Federal agencies recognize they are flying into a suit storm. Several are seeking extra budget resources — or at least trying to avoid cuts — to defend the administration’s policies in court.

The Interior Department’s Office of the Solicitor, for instance, recently told Congress that it will need “substantial attorney resources” to handle an expected influx of cases. In its $65 million budget justification for Fiscal Year 2018, the solicitor’s office said: “We can also expect litigation on virtually every major permitting decision authorizing energy development on federal lands.”

It is a reasonable expectation. On Monday, Earthjustice filed suit against Interior’s attempt to delay an Obama rule limiting releases of methane from oil and gas operations on public lands. The lawsuit was filed in U.S. District Court in California on behalf of tribal groups and several environmental organizations including the Sierra Club, Natural Resources Defense Council and Wilderness Society.

Interior also faces lawsuits on plans to open tens of thousands of acres of public land to coal industry leasing, and the proposed removal of bans on new offshore oil drilling in the Arctic and Atlantic oceans.

The suits are piling up at the EPA, too. Environmentalists are suing over the agency’s refusal to ban a controversial pesticide, chlorpyrifos; its delay of rules aimed at preventing chemical plants accidents; and its delay on implementing rules on coal plant discharges into public waters.

During the Obama administration, industry groups and Republican state attorneys general engaged in their own litigation storm. As the top legal officer of Oklahoma, Scott Pruitt solicited industry contributions to build up the war chest of the Republican Attorneys General Association. He also filed or helped file 14 lawsuits against the EPA. In January, Trump tapped Pruitt to serve as EPA administrator, putting him in a position to unravel Obama-era policies he previously had litigated against.

Democratic attorneys general are now ramping up their own fundraising and are filing or joining in suits against Trump’s environmental, immigration and education policies. AGs such as California’s Xavier Becerra, Pennsylvania’s Josh Shapiro and Washington state’s Bob Ferguson are all raising their profiles with these lawsuits. All are seen as contenders for future seats as governor or U.S. senator.

The litigation surge is not going unnoticed by congressional Republicans. They’ve restarted a campaign to counter what they call “abusive” use of the courts by environmentalists and other groups. On June 28, the GOP-controlled House held a hearing focusing on lawsuits against the Interior Department, which manages the nation’s vast public lands.

“A legal sub industry has thrived from endless environmental litigation, while burdening the livelihoods of countless citizens,” said U.S. Rep. Mike Johnson, R-LA, vice chair of House Natural Resources Subcommittee on Oversight and Investigations. Excessive litigation, he added, “drains our taxpayer dollars away from good stewardship of our natural resources.”

At the EPA, Pruitt finds himself in the awkward position of wishing the courts would dismiss litigation he filed against the agency while serving as Oklahoma attorney general. One of those suits was against Obama’s Clean Power Plan, which would set the nation’s first limits on carbon emissions from power plants.

In March, Trump’s Justice Department asked the U.S. Court of Appeals in the D.C. Circuit to indefinitely suspend the lawsuit Pruitt had filed. Apparently, the administration was concerned that if the litigation continued, the court could rule in favor of the Clean Power Plan, forcing the White House to implement a climate initiative it opposes.

In a partial victory for the administration, the appeals court in April put the litigation on hold, and asked for more information before it decides on the case going forward. Depending on the court’s final ruling, the White House will likely be allowed to go ahead with full revocation of the regulations.

Revocation of the Clean Power Plan, however, could prompt another lawsuit. Environmental groups contend the EPA is obligated to address climate pollution under the Clean Air Act.

McClatchy’s Ben Wieder contributed to this story.

Flood insurance rates could rise for hundreds of thousands of homeowners under proposal

BY STUART LEAVENWORTH

sleavenworth@mcclatchydc.com

Congress is considering sweeping changes to the debt-laden National Flood Insurance Program that could jack up flood insurance rates for hundreds of thousands of homeowners under a bill that a Florida real estate group called “devastating.”

The proposal, part of a flood-insurance package with a Sept. 30 deadline, could prove costly to homeowners in flood-prone regions ranging from Florida to Texas to California’s Central Valley. It would primarily affect homeowners with low “grandfathered” rates based on flood maps that have changed since they purchased their homes.

“This could be devastating to so many people,” said Maria S. Wells, president of the Florida Realtors trade group, which has 175,000 members statewide. “People don’t realize they could be remapped into a much more high-risk zone.”

Rep. Sean Duffy, a Wisconsin Republican who introduced the 21st Century Flood Reform Act, agreed the change could deal a financial blow to some homeowners, over time. But the current practice, he argued, is unfair to households that live outside of floodplains. Through their tax dollars, they subsidize a relatively small number of homeowners who own property near the coasts, rivers and other waterways.

Duffy said he’s open to amending his legislation to address affordability concerns — but not if it keeps the current subsidy system in place.

“We are not talking about the poorest people in the country,” said Duffy, who represents a wide swath of northern Wisconsin. “You have subsidies going to these wealthy homeowners on the coast.”

Congress has until the end of September to reauthorize the National Flood Insurance Program, otherwise the program will lapse, preventing people from buying flood insurance. When that impasse last happened for a month in 2010, an estimated 46,800 home sales transactions were interrupted or canceled, according to the National Realtors Association.

Last month, a key House committee approved seven pieces of legislation aimed at improving the solvency of the insurance program, which owes $25 billion to the U.S. Treasury.

The full House is expected to consolidate and vote on the whole package — including the part that would increase insurance rates — before the August recess, but affordability may hamper its passage. If the House and Senate can’t pass and reconcile their two reauthorization bills, Congress could miss the Sept. 30 deadline, and the housing market could see a repeat of 2010.

“We need to gently move these people to the point they are paying premiums commensurate with the risk.”

Sean Duffy, R-Wisconsin, author of the debated flood insurance bill

At issue is the insurance program’s practice of “grandfathering” premiums, which allows homeowners to continue paying relatively low insurance rates even when the Federal Emergency Management Agency, or FEMA, changes its maps to account for newly discovered risks. The practice is based on the concept that people with homes built to the required codes at the time of purchase shouldn’t be penalized when those codes change.

Under Duffy’s bill, as now written, no new grandfathering would be allowed after four years, and premiums on existing grandfathered policies would rise 8 percent yearly until they reach full-risk rates.

No one knows for sure how many households could be affected by the change, but Duffy said FEMA has told him it could number 500,000 or higher. The increased premium costs could be sizable.

For example, FEMA’s rate tables show that a home in an “A Zone” of Special Flood Hazard Area — typically near a lake, river or coastline — that now costs $3,000 a year in insurance premiums could rise to $5,000 a year if FEMA determined that expected flood elevations were two feet higher than previously mapped.
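To get a feel for the timescale of the phase-out, the compounding can be worked out directly. The sketch below is a simple illustration using the bill’s 8 percent annual increase and the article’s $3,000-to-$5,000 example; it is not FEMA’s actual rating methodology.

```python
# Sketch: years for a grandfathered premium rising 8% annually to reach a
# full-risk rate. Uses the article's example figures; not FEMA's rating method.

def years_to_full_risk(current_premium: float, full_risk_premium: float,
                       annual_increase: float = 0.08) -> int:
    """Count the yearly step-ups needed before the premium meets or exceeds
    the full-risk rate (increases stop once that rate is reached)."""
    years = 0
    premium = current_premium
    while premium < full_risk_premium:
        premium *= 1 + annual_increase
        years += 1
    return years

if __name__ == "__main__":
    # A $3,000 grandfathered premium vs. a $5,000 full-risk rate takes about
    # 7 annual increases; at 8% a year a premium roughly doubles in 9 years.
    print(years_to_full_risk(3000, 5000))  # -> 7
```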

Wells, a real estate broker with offices in Florida and Pennsylvania, said one frequent criticism of the program — that rich people benefit disproportionately — is a myth. Under the program, flood damage claims are capped at $250,000, much less than full replacement for a wealthy person’s home.

“People talk about beach-side mansions, but I can tell you, they don’t use NFIP,” she said. “They use Lloyd’s of London.”

Proponents of the legislation, however, say that construction of ever-more-valuable homes along the coast and on floodplains adds to the risks being covered by the taxpayer-subsidized flood insurance program. Subsidies, they argue, encourage more people to build and buy homes on floodplains, perpetuating the program’s debt.

“We need to transition to sound actuarial rates,” said Rep. Jeb Hensarling, R-Tex., chairman of the House Financial Services Committee, which approved the legislative package last month. “Otherwise, we are putting more people in harm’s way.”

Federal flood insurance has long been the subject of a congressional tug-of-war, especially following Hurricane Katrina in 2005 and Hurricane Sandy in 2012. In 2005, FEMA borrowed more than $17 billion from the U.S. Treasury to pay claims damages, and Hurricane Sandy triggered another $9 billion in borrowing.

As of late 2016, close to five million households nationwide held federal flood insurance policies, according to FEMA. Florida was the largest, with nearly 1.8 million policy holders, followed by Texas with 608,000, Louisiana with 472,000 and California with 295,000.

The issue is complicated by the fact that many U.S. cities were originally built along coastlines and rivers and are now deemed vulnerable to extreme flooding. Scientists say that with sea level rise and stronger storms expected from global climate change, the U.S. Treasury is sure to take big hits from future flood disasters.

To reduce the mounting debt, Congress in 2012 passed the Biggert-Waters Flood Insurance Reform Act, which was “designed to allow premiums to rise to reflect the true risk of living in high-flood areas.” But two years later, following an outcry from affected homeowners, Congress scaled back the premium rate increases.

After President Trump took office, federal debt hawks saw a chance to revive provisions in the 2012 legislation, including ending “grandfathering.” That provision is part of the 21st Century Flood Reform Act introduced by Duffy.

Some of the House package has wide support from consumer, environmental and taxpayer groups. These include provisions aimed at removing obstacles for homeowners to purchase private flood insurance, improving the FEMA claims process, discouraging new development in floodplains and helping communities “mitigate” existing risks, such as elevating structures above expected flood depths.

There’s also agreement that FEMA needs to update its process for revising flood maps, although that goal could be complicated by Trump’s proposed 2018 budget, which calls for cuts to the agency’s mapping program.

Officials with the National Realtors Association fear the House will approve legislation to phase out grandfathering with no solid information on how many households would be affected and at what cost.

“We know for sure that homeowners will be hit hard if the NFIP is allowed to expire in September, but we need better data from FEMA before we can have a fully-informed discussion on flood insurance,” William E. Brown, president of the National Realtors Association, said in a statement to McClatchy. Without data, he added, “determining costs and risks both for homeowners and for taxpayers [is] tremendously difficult.”

Duffy, a former prosecutor and ESPN commentator, said his legislation is based on the principle that people should approach home purchases knowing that insurance conditions could change. He likened it to a consumer buying car insurance, knowing that premiums could go up if a teenager is added to the policy.

But Duffy also said he’s aware his bill faces opposition and he’s working to address the concerns about grandfathering.

“I am not a purist on this,” he said. “I want to get a bill that can pass.”

Nearly 200 environmental defenders murdered last year, says report

By National Observer in News, Energy, Politics | July 13th 2017

Environmental protesters are increasingly being treated like criminals and slaughtered when they are defending land from the encroachment of pipelines, mining, and other resource development, says a new report released on Thursday.

This revelation was included in a sweeping report by Global Witness on the state of “environmental defenders” in 2016—a year marked, for example, by ongoing protests of oil and gas pipelines in North America and many other forms of environmental protest around the world.

Criminalizing environmental protesters, the report argues, means people who oppose development projects face “trumped-up and aggressive criminal and civil cases.”

This translates to “silencing dissent,” scaring people, "tarnish[ing] their reputations," and wasting court resources, the report said.

Though the report’s authors found criminalization was particularly prevalent in Latin America, they highlighted Canada’s new anti-terror legislation and noted National Observer reporter Bruce Livesey's coverage of RCMP surveillance of Indigenous and environmental activists.

"In Canada, environmental groups and First Nation Peoples fear new anti-terrorism legislation will be used to step up the surveillance of protesters opposed to oil and mining projects," the report said. "The Canadian media has also reported on several government agencies that are systematically spying on environmental organizations."

The group calls for governments to reform laws “that unjustly target environmental activism and the right to protest,” and urged companies to “stop abusing the judicial process to silence defenders.”

Global Witness also calls on governments to ensure activists “can peacefully voice their opinions without facing arrest, and are guaranteed due process when charges are brought against them.”

Global Witness is an international advocacy organization that bills itself as investigating—and making public—links between resource extraction, conflict, and corruption. The not-for-profit research group is based in London, UK.

The bulk of its new report deals with the slayings of environmental activists.

They found nearly 200 people were murdered around the world last year in connection with mining, logging and other extraction efforts, or while protecting national parks.

"Almost 1,000 murders have been recorded by Global Witness since 2010, with many more facing threats, attacks, harassment, stigmatization, surveillance and arrest," the report said. "Clearly governments are failing to protect activists, while a lack of accountability leaves the door open to further attacks. By backing extractive and infrastructure projects imposed on local communities without their consent, governments, companies and investors are complicit in this crisis."

Worst offenders:

Brazil: 49

Colombia: 37

Philippines: 28

India: 16

Deforestation soars in Colombia after Farc rebels' demobilization

Area of deforestation climbed 44% in 2016 compared with the year before, as criminal groups have swooped in to promote illegal logging and mining

Sibylla Brodzinsky in Bogotá, Tuesday 11 July 2017 05.00 EDT, Last modified on Tuesday 11 July 2017 17.00 EDT

Colombia has seen an alarming surge in deforestation after the leftwing rebels relinquished control over vast areas of the country as a part of a historic peace deal.

The area of deforestation jumped 44% in 2016 to 178,597 hectares (690 sq miles) compared to the year before, according to official figures released this month – and most of the destruction was in remote rainforest areas once controlled by the Revolutionary Armed Forces of Colombia (Farc).

The rebel army was a violent illegal armed group, but for decades the guerrillas enforced strict limits on logging by civilians – in part to protect their cover from air raids by government warplanes.

But last year, as the Farc began to move toward demobilization, criminal groups moved in, taking advantage of the vacuum left behind to promote illegal logging and mining and cattle ranching. Civilians who had been ordered by the Farc to maintain 20% of their land with forest cover began expanding their farms.

“The Farc would limit logging to two hectares a year in the municipality,” said Jaime Pacheco, mayor of the town of Uribe, in eastern Meta province. “In one week [last year], 100 hectares were cleared and there is little we can do about it.”

Over their 53-year existence, the Farc were far from environmental angels. While in some areas the guerrilla presence helped maintain the forests, in others the rebels promoted clear-cuts to make way for the planting of coca, the raw material used in cocaine, or illegal gold mining, both a source of income for the group.

Bombings of oil pipelines dumped millions of gallons of oil in waterways and jungles. Between 1991 and 2013, 58% of the deforestation in Colombia was seen in conflict areas, according to a 2016 report by Colombia’s planning ministry.

“They weren’t environmentalists but they did regulate activity, and – since they had the guns – people complied,” says Susana Muhamad, an environmental activist who conducted a diagnostic study of the environmental risks of the rebel retreat.

“We told the government that it would need to establish control in these areas quickly, but it hasn’t,” she said. “It’s like the wild west now, a land rush.”

Colombia, which has the world’s eighth-largest forest cover, committed under the Paris climate accords to reaching zero net deforestation by 2020 and halting the loss of all natural forest by 2030.

Norway is donating about $3.5m over two years to a pilot project that hopes to stem deforestation by offering paid jobs to former Farc fighters and communities to safeguard forests by tracking and reporting illegal logging, adopting sustainable farming methods and undertaking eco-tourism projects.

“We hope this project can be the way for more activities whereby peace comes with green dividends,” said Vidar Helgesen, Norway’s environment minister, on a recent visit to Colombia.

That is the kind of incentive farmers in the town of Uribe, once a Farc stronghold, say they need to keep the forest on their land intact. “We are willing to leave the forests if we get some sort of subsidy to do it,” one farmer, Noel Segura, said in a phone interview.

Wendy Arenas, of the government’s post-conflict department, said the new deforestation figures were an alert. “We knew it could happen,” she told el Espectador newspaper. “It is no secret that the zones left behind by the Farc are territories where other actors are taking over and profiting from logging.”

Solar Power Gets $46 Million Boost From U.S. Energy Department

By Chris Martin, July 12, 2017

Technology grants awarded to 48 universities and developers

Arizona State leads awards with new module testing methods

The U.S. Energy Department awarded $46.2 million in research grants to improve solar energy technologies and reduce costs to 3 cents per kilowatt-hour by 2030.

The money will be partly matched by the 48 projects awarded to laboratories and universities, including Arizona State, which plans to use $1.6 million to develop an X-ray test to evaluate the performance of thin-film modules under harsh conditions, according to an emailed statement Wednesday.

“These projects ensure there’s a pipeline of knowledge, human resources, transformative technology solutions and research to support the industry,” Charlie Gay, director of the Energy Department’s SunShot Initiative, said in the statement.

Other awards include:

$1.37 million to Stanford University to improve a perovskite-on-silicon module design

$1.13 million to Colorado State University to improve thin-film manufacturing

$2 million to SolarReserve Inc. to reduce costs of molten salt storage

Swiss firm Climeworks begins sucking carbon dioxide from the atmosphere in fight against climate change

Company's 'super ambitious' plan is to capture one per cent of annual emissions by 2025

Ian Johnston Environment Correspondent

Friday 23 June 2017 11:47 BST

Humans have pumped vast amounts of carbon into the atmosphere, but we could soon be removing significant amounts. (Corbis)

A Swiss company has opened what is believed to be the world’s first ‘commercial’ plant that sucks carbon dioxide from the atmosphere, a process that could help reduce global warming.

Climeworks, which created the plant in Hinwil, near Zurich, told the Carbon Brief website that the process was currently not making money.

However, the firm expressed confidence it could bring down the cost from $600 per tonne of the greenhouse gas to $200 in three to five years, with a longer-term target of $100.

And its “super-ambitious” vision is to capture one per cent of annual global carbon emissions by 2025, which would involve building hundreds of thousands of the devices.
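Some rough arithmetic shows why the vision implies so many units. The sketch below assumes a per-plant capture rate on the order of 900 tonnes of CO2 a year (a figure reported for the Hinwil plant elsewhere, not in this article) and a global total of roughly 40 billion tonnes of CO2 a year; both are assumptions used only for illustration.

```python
# Sketch: rough scale-up arithmetic behind "hundreds of thousands of devices".
# Per-plant capacity and the global emissions total are assumed, illustrative values.

GLOBAL_CO2_EMISSIONS_T = 40e9   # ~40 billion tonnes of CO2 per year (assumption)
PLANT_CAPACITY_T = 900.0        # ~900 tonnes of CO2 per year per Hinwil-scale plant (assumption)

target_capture_t = 0.01 * GLOBAL_CO2_EMISSIONS_T   # 1% of annual global emissions
plants_needed = target_capture_t / PLANT_CAPACITY_T

print(f"Target capture: {target_capture_t:.2e} tonnes/year")
print(f"Hinwil-scale plants needed: {plants_needed:,.0f}")  # roughly 440,000
```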

The captured gas is currently being sold, appropriately enough, to a greenhouse that grows fruit and vegetables.

However, it could also be used to make fizzy drinks and renewable fuels, or stored underground.

One of the company’s founders, Christoph Gebald, told Carbon Brief: “With this plant, we can show costs of roughly $600 per tonne, which is, of course, if we compare it to a market price, very high.

“But, if we compare it to studies which have been done previously, projecting the costs of direct air capture, it’s a sensation.”


Previous research into the idea had assumed a cost of $1,000 per tonne.

“We are very confident that, once we build version two, three and four of this plant, we can bring down costs,” Mr Gebald said.

“We see a factor [of] three cost reduction in the next three to five years, so a final cost of $200 per tonne. The long-term target price for what we do is clearly $100 per tonne of CO2.”

He said such a carbon capture system could start to make an impact on emissions on a global scale, but this might require a price to be put on carbon emissions. The European Union and some other countries around the world have put a price on carbon for some major emitters, but environmentalists have complained it is too low and fails to reflect the true cost.

“The vision of our company is to capture [one] per cent of global emissions by 2025, which is super ambitious, but which is something that is feasible,” Mr Gebald said.

“Reaching one per cent of global emissions by 2025 is currently not possible without political will, without a price on carbon, for example. So it’s not possible by commercial means only.”

Want to fight climate change? Have fewer children

Next best actions are selling your car, avoiding flights and going vegetarian, according to study into true impacts of different green lifestyle choices

Damian Carrington Environment editor

Wednesday 12 July 2017 00.45 EDT, Last modified on Wednesday 12 July 2017 13.34 EDT

The greatest impact individuals can have in fighting climate change is to have one fewer child, according to a new study that identifies the most effective ways people can cut their carbon emissions.

The next best actions are selling your car, avoiding long flights, and eating a vegetarian diet. These reduce emissions many times more than common green activities, such as recycling, using low energy light bulbs or drying washing on a line. However, the high impact actions are rarely mentioned in government advice and school textbooks, researchers found.

Carbon emissions must fall to two tonnes of CO2 per person by 2050 to avoid severe global warming, but in the US and Australia emissions are currently 16 tonnes per person and in the UK seven tonnes. “That’s obviously a really big change and we wanted to show that individuals have an opportunity to be a part of that,” said Kimberly Nicholas, at Lund University in Sweden and one of the research team.

The new study, published in Environmental Research Letters, sets out the impact of different actions on a comparable basis. By far the biggest ultimate impact is having one fewer child, which the researchers calculated equated to a reduction of 58 tonnes of CO2 for each year of a parent’s life.

The figure was calculated by totting up the emissions of the child and all their descendants, then dividing this total by the parent’s lifespan. Each parent was ascribed 50% of the child’s emissions, 25% of their grandchildren’s emissions and so on.
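That attribution scheme can be reproduced with a few lines of arithmetic. The sketch below is only an illustration of the bookkeeping the study describes: the per-capita emissions, lifespan, fertility rate and number of generations are assumed round numbers rather than the study's demographic projections, so it shows the method rather than the 58-tonne result.

```python
# Sketch of the attribution arithmetic described above: a parent is ascribed 50%
# of a child's lifetime emissions, 25% of each grandchild's, and so on, and the
# total is divided by the parent's lifespan. All inputs are assumed, round
# illustrative numbers -- not the study's demographic projections.

def ascribed_tonnes_per_year(annual_emissions_t: float = 16.0,  # tonnes CO2/person/year (assumed)
                             lifespan_years: float = 80.0,      # assumed lifespan per person
                             children_per_person: float = 2.0,  # assumed constant fertility
                             generations: int = 10) -> float:
    lifetime_emissions = annual_emissions_t * lifespan_years
    total_ascribed = 0.0
    descendants = 1.0   # start with the one extra child
    share = 0.5         # parent's share of that child's emissions
    for _ in range(generations):
        total_ascribed += descendants * share * lifetime_emissions
        descendants *= children_per_person  # next generation's head count
        share *= 0.5                        # parent's share halves each generation
    return total_ascribed / lifespan_years

if __name__ == "__main__":
    # With two children per person, the halved share exactly offsets the doubled
    # head count, so every generation contributes the same ascribed amount.
    print(round(ascribed_tonnes_per_year(), 1))
```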

“We recognise these are deeply personal choices. But we can’t ignore the climate effect our lifestyle actually has,” said Nicholas. “It is our job as scientists to honestly report the data. Like a doctor who sees the patient is in poor health and might not like the message ‘smoking is bad for you’, we are forced to confront the fact that current emission levels are really bad for the planet and human society.”

“In life, there are many values on which people make decisions and carbon is only one of them,” she added. “I don’t have children, but it is a choice I am considering and discussing with my fiance. Because we care so much about climate change that will certainly be one factor we consider in the decision, but it won’t be the only one.”

Overpopulation has been a controversial factor in the climate change debate, with some pointing out that an American is responsible for 40 times the emissions produced by a Bangladeshi and that overconsumption is the crucial issue. The new research comes a day after researchers blamed overpopulation and overconsumption for the “biological annihilation” of wildlife that has started a mass extinction of species on the planet.

Nicholas said that many of the choices had positive effects as well, such as a healthier diet, as meat consumption in developed countries is about five times higher than recommended by health authorities. Cleaner transport also cuts air pollution, and walking and cycling can reduce obesity. “It is not a sacrifice message,” she said. “It is trying to find ways to live a good life in a way that leaves a good atmosphere for the planet. I’ve found it really positive to make many of these changes.”

The researchers analysed dozens of sources from Europe, North America and Japan to calculate the carbon savings individuals in richer nations can make. They found getting rid of a car saved 2.4 tonnes a year, avoiding a return transatlantic flight saved 1.6 tonnes and becoming vegetarian saved 0.8 tonnes a year.

These actions saved the same carbon whichever country an individual lived in, but others varied. The savings from switching to an electric car depend on how green electricity generation is, meaning big savings can be made in Australia but the savings in Belgium are six times lower. Switching your home energy supplier to a green energy company also varied, depending on whether the green energy displaces fossil fuel energy or not.

Nicholas said the low-impact actions, such as recycling, were still worth doing: “All of those are good things to do. But they are more of a beginning than an end. They are certainly not sufficient to tackle the scale of the climate challenge that we face.”

The researchers found that government advice in the US, Canada, EU and Australia rarely mentioned the high-impact actions, with only the EU citing eating less meat and only Australia citing living without a car. None mentioned having one fewer child. In an analysis of school textbooks in Canada, only 4% of the recommendations were high impact.

Chris Goodall, an author on low carbon living and energy, said: “The paper usefully reminds us what matters in the fight against global warming. But in some ways it will just reinforce the suspicion of the political right that the threat of climate change is simply a cover for reducing people’s freedom to live as they want.

“Population reduction would probably reduce carbon emissions but we have many other tools for getting global warming under control,” he said. “Perhaps more importantly, cutting the number of people on the planet will take hundreds of years. Emissions reduction needs to start now.”

Full Report

http://iopscience.iop.org/article/10.1088/1748-9326/aa7541/pdf

OIRA works quietly on updating social cost of carbon

Hannah Hess, E&E News reporter, Greenwire: Thursday, June 15, 2017

Some federal employees are still working on the social cost of carbon.

Not far from the White House, some of the federal government's most influential number crunchers are still working on the social cost of carbon.

President Trump's executive order on energy independence effectively signaled "pencils down" on federal work to estimate the monetary damage of greenhouse gas emissions, disbanding the interagency working group that calculated the dollar value of those emissions' effects on the planet and society.

But his order didn't eliminate the metric entirely.

The Office of Information and Regulatory Affairs' Jim Laity, a career staffer who leads the Natural Resources and Environment Branch, said yesterday his office is "actively working on thinking about the guidance" Trump gave in March.

With the Trump agenda focused on regulatory rollback, federal agencies haven't yet issued rules that require valuations of carbon emissions, "although they are working on something coming in the not-too-distant future," Laity told an audience attending the National Academy of Sciences' seminar on valuing climate change impacts.

Employees from U.S. EPA, the Interior Department and the Department of Energy were in the crowd, along with academics and prominent Washington think tank scholars.

Greens fear the Trump administration could wipe out the social cost of carbon valuation, currently set around $40 per metric ton of carbon dioxide, as part of what they portray as a war on climate science. Revisions to the White House estimates have raised hackles among Republicans and conservatives, who allege they are part of a secret power grab.

Laity would play a key role in that effort as top overseer of energy and environmental rules.

Addressing the summit via webcast yesterday, Laity walked the audience through a decade of actions related to the calculation and influential feedback, including from a 13-member panel of the National Academies of Sciences, Engineering and Medicine. He stressed the outcome is still uncertain.

The Trump administration "looked at the work we had done, and they looked at the criticisms, and they decided we needed to have a pause to kind of rethink what we were doing a little bit in this area," Laity said, previewing the executive order that was praised by the oil and gas industry.

The metric flew under the radar during President Obama's first term. However, a significant jump in values in 2013 set the stage for a clash between then-OIRA Director Howard Shelanski and Republican lawmakers who alleged the estimate was the product of a closed-door process (E&E Daily, July 19, 2013).

A few months after the hearing, OIRA announced it would provide opportunity for public comment on the estimate. Critics took issue with the discount rate the government used to account for damage and expressed concern that the finding, though based on peer-reviewed literature, had not been peer-reviewed.

"I think we also realized that we needed to pay more attention to some of these issues than maybe we had in the past," Laity said.

OIRA received 140 unique comments and some 39,000 letters, which Laity said mostly supported accounting in cost-benefit analyses for the cost to society of each ton of emissions: property damage, health care costs, lost agricultural output and other expenses.

"We took some of those concerns to heart," he said.

In July 2015, the White House slightly revised its estimate. It also announced that the executive branch would seek independent expert advice from the National Academies to inform yet another revision of the estimate, though federal agencies would continue to use the current figure in their rulemakings until that revision could be made (Greenwire, July 6).

Part one of the two-phase study, released in January 2016, blessed the figure. It found the White House did not need to review its estimates in the short term. Part two, released a few weeks before Obama left office, recommended a new framework for making the estimate and more research into long-term climate damage.

Within three months, the Trump administration formally disbanded the interagency working group that would have implemented those recommendations. The executive order also formally withdrew the technical support documents OIRA had used in the past.

The think tank Resources for the Future recently announced the start of an initiative focused on responding to the scientists' recommendations (Greenwire, June 9).

Guidance for OIRA

Laity noted that the next paragraph of Trump's executive order acknowledged agencies would need to continue monetizing greenhouse gas emissions damage in their regulations, to the extent that the rules affect emissions.

As an interim measure, the order directed OIRA to offer some guidance about how to do that and specified that any values used be consistent with Circular A-4, a guidance document the Office of Management and Budget published in September 2003.

There are two main areas of concern: the discount rate and the global focus of the current estimate.

"A-4 has very specific advice," Laity explained. The document says pretty unequivocally that the main factor in weighing regulations should be cost and benefits to the U.S. If a regulation does have significant impacts outside the U.S. that are important to consider, these should be "clearly segregated out and reported separately," he said.

Economists clashed during a recent hearing of the House Science, Space and Technology Committee on how best to estimate the figure.

Former Obama administration official Michael Greenstone, who attended yesterday's summit and gave a presentation, argues the benefits of emission reductions play out in part in international politics.

Greenstone, chief economist for the Council of Economic Advisers in 2009 and 2010, has warned that reverting to a social cost of carbon that only considers domestic benefits of emission reductions is "essentially asking the rest of the world to ramp up their emissions" (E&E Daily, March 1).

Said Laity: "All I can say right now is that we are actively working on thinking about the guidance that we have been given in the new executive order and, sort of technically, how best to implement that in upcoming ... rulemaking."

Aging Oil Pipeline Under the Great Lakes Should Be Closed, Michigan AG Says

Enbridge’s Line 5 lies exposed on the floor of the Straits of Mackinac and has been losing chunks of its outer coating. A new report suggests ways to deal with it.

BY NICHOLAS KUSNETZ, JUN 30, 2017

Enbridge's Line 5 carries 540,000 barrels of oil per day from Canada to Ohio, going under the Straits of Mackinac, where Lake Michigan and Lake Huron meet.

Michigan Attorney General Bill Schuette called for a deadline to close a controversial portion of an oil pipeline that runs along the bottom of the Straits of Mackinac, a channel that connects two of the Great Lakes. The pipeline has had more than two dozen leaks over its lifespan, and parts of its outer coating have come off.

The announcement came as the state released a report looking at alternatives for that section of the Enbridge pipeline, called Line 5.

The report's suggestions include drilling a tunnel under the straits for a new line, selecting an alternate route or using rail cars to transport the oil instead. It also left open the possibility that the existing pipeline could continue to operate indefinitely.

"The Attorney General strongly disagrees" with allowing the existing pipeline to continue operating, said a statement released by Schuette's office on Thursday. "A specific and definite timetable to close Line 5 under the straits should be established."

Schuette did not, however, specify when that deadline should be, or how it should be set.

For years, environmentalists and a local Indian tribe have been calling for the closure of this short stretch of the pipeline. Built in 1953, it sits exposed above the lakebed where Lake Huron meets Lake Michigan. Earlier this year, Enbridge acknowledged that an outer coating had fallen off of the line in places, and it has sprung at least 29 leaks in its 64-year history. The 645-mile line carries about 540,000 barrels per day of light crude, including synthetic crude from Canada's tar sands, as well as natural gas liquids, from Superior, Wisconsin, to Sarnia, Ontario.

Schuette, a Republican, had said before that this section of the line should close eventually, but he hasn't taken any action to hasten a closure. Advocacy groups have asked the state to revoke Enbridge's easement to pass through the straits.

"It's great that he's reasserting his commitment to shut down Line 5," said Mike Shriberg, Great Lakes executive director for the National Wildlife Federation. "The question now is, is there enough evidence for him to take action right away."

The state had commissioned two studies on the line to be paid for by Enbridge, one that was released yesterday and another that was to produce a risk analysis for the pipeline. Last week, however, the state cancelled the risk analysis after discovering that someone who had contributed to it had subsequently done work for Enbridge.

Michael Barnes, an Enbridge spokesman, said the company would need time to review the report before giving specific comments, but that it "remains committed to protecting the Great Lakes and meeting the energy needs of Michigan through the safe operation of Line 5."

Shriberg said that now that the report on alternatives is out, it's time for the state to act.

"Ultimately, the attorney general and the governor have a decision to make," he said. "They've been saying for years that they've been waiting for the full information to come in."

California invested heavily in solar power. Now there's so much that other states are sometimes paid to take it

By IVAN PENN on JUNE 22, 2017 for LA Times

On 14 days during March, Arizona utilities got a gift from California: free solar power.

Well, actually better than free. California produced so much solar power on those days that it paid Arizona to take excess electricity its residents weren’t using to avoid overloading its own power lines.

It happened on eight days in January and nine in February as well. All told, those transactions helped save Arizona electricity customers millions of dollars this year, though grid operators declined to say exactly how much. And California also has paid other states to take power.

The number of days that California dumped its unused solar electricity would have been even higher if the state hadn’t ordered some solar plants to reduce production — even as natural gas power plants, which contribute to greenhouse gas emissions, continued generating electricity.

Solar and wind power production was curtailed a relatively small amount — about 3% in the first quarter of 2017 — but that’s more than double the same period last year. And the surge in solar power could push the number even higher in the future.

Why doesn’t California, a champion of renewable energy, use all the solar power it can generate?

The answer, in part, is that the state has achieved dramatic success in increasing renewable energy production in recent years. But it also reflects sharp conflicts among major energy players in the state over the best way to weave these new electricity sources into a system still dominated by fossil-fuel-generated power.

City officials and builders in Redondo Beach want a mixed-use development to replace the current natural gas facility. They say there is no need to overhaul the power plant when there is an abundance of clean alternatives. (Rick Loomis/Los Angeles Times)

No single entity is in charge of energy policy in California. This has led to a two-track approach that has created an ever-increasing glut of power and is proving costly for electricity users. Rates have risen faster here than in the rest of the U.S., and Californians now pay about 50% more than the national average.

Perhaps the most glaring example: The California Legislature has mandated that one-half of the state’s electricity come from renewable sources by 2030; today it’s about one-fourth. That goal once was considered wildly optimistic. But solar panels have become much more efficient and less expensive. So solar power is now often the same price or cheaper than most other types of electricity, and production has soared so much that the target now looks laughably easy to achieve.

At the same time, however, state regulators — who act independently of the Legislature — until recently have continued to greenlight utility company proposals to build more natural gas power plants.

State Senate Leader Kevin de Leon (D-Los Angeles) wants California to produce 100% of its electricity from clean energy sources such as solar and wind by 2045.

These conflicting energy agendas have frustrated state Senate Leader Kevin de Leon (D-Los Angeles), who opposes more fossil fuel plants. He has introduced legislation that would require the state to meet its goal of 50% of its electricity from renewable sources five years earlier, by 2025. Even more ambitiously, he recently proposed legislation to require 100% of the state’s power to come from renewable energy sources by 2045.

“I want to make sure we don’t have two different pathways,” de Leon said. Expanding clean energy production and also building natural gas plants, he added, is “a bad investment.”

Environmental groups are even more critical. They contend that building more fossil fuel plants at the same time that solar production is being curtailed shows that utilities — with the support of regulators — are putting higher profits ahead of reducing greenhouse gas emissions.

“California and others have just been getting it wrong,” said Leia Guccione, an expert in renewable energy at the Rocky Mountain Institute in Colorado, a clean power advocate. “The way [utilities] earn revenue is building stuff. When they see a need, they are perversely [incentivized] to come up with a solution like a gas plant.”

Regulators and utility officials dispute this view. They assert that the transition from fossil fuel power to renewable energy is complicated and that overlap is unavoidable.

They note that electricity demand fluctuates — it is higher in summer in California, because of air conditioning, and lower in the winter — so some production capacity inevitably will be underused in the winter. Moreover, the solar power supply fluctuates as well. It peaks at midday, when the sunlight is strongest. Even then it isn’t totally reliable.

Because no one can be sure when clouds might block sunshine during the day, fossil fuel electricity is needed to fill the gaps. Utility officials note that solar production is often cut back first because starting and stopping natural gas plants is costlier and more difficult than shutting down solar panels.

In the Mojave Desert at the California/Nevada border, the Ivanpah Solar Electric Generating System uses 347,000 garage-door-sized mirrors to heat water that powers steam generators. This solar thermal plant is one of the clean energy facilities that help produce 10% of the state’s electricity. (Mark Boster / Los Angeles Times)

Eventually, unnecessary redundancy of electricity from renewables and fossil fuel will disappear, regulators, utilities and operators of the electric grid say.

“The gas-fired generation overall will show decline,” said Neil Millar, executive director of infrastructure at CAISO, the California Independent System Operator, which runs the electric grid and shares responsibility for preventing blackouts and brownouts. “Right now, as the new generation is coming online and the older generation hasn’t left yet, there is a bit of overlap.”

Utility critics acknowledge these complexities. But they counter that utilities and regulators have been slow to grasp how rapidly technology is transforming the business. A building slowdown is long overdue, they argue.

Despite a growing glut of power, however, authorities only recently agreed to put proposals for some of the new natural gas power plants that utilities want to build on hold while they reconsider whether those plants are needed.

A key question in the debate, and one regulators are studying, is when California will be able to rely on renewable power for most or all of its needs and safely phase out fossil fuel plants.

The answer depends in large part on how fast battery storage becomes cheaper and able to hold power closer to customers for use when the sun isn’t shining. Solar proponents say the technology is advancing rapidly, making reliance on renewables possible far sooner than previously predicted, perhaps in two decades or less, which would leave little need for new power plants with a life span of 30 to 40 years.

Calibrating this correctly is crucial to controlling electricity costs.

“It’s not the renewables that’s the problem. It’s the state’s renewable policy that’s the problem,” said Gary Ackerman, president of the Western Power Trading Forum, an association of independent power producers. “We’re curtailing renewable energy in the summertime months. In the spring, we have to give people money to take it off our hands.”

Not long ago, solar was barely a rounding error for California’s energy producers.

In 2010, power plants in the state generated just over 15% of their electricity production from renewable sources. But that was mostly wind and geothermal power, with only a scant 0.5% from solar. Now that overall amount has grown to 27%, with solar power accounting for 10%, or most of the increase. The solar figure doesn’t include the hundreds of thousands of rooftop solar systems that produce an additional 4 percentage points, a share that is ever growing.

The share of the state’s power generated by solar utilities and rooftop panels has skyrocketed in recent years: in 2016, solar utilities supplied 9.6% of the state’s electricity and rooftop panels another 4.2%. (Rooftop panels were not tracked by the federal government prior to 2014.)

Behind the rapid expansion of solar power: its plummeting price, which makes it highly competitive with other electricity sources. In part that stems from subsidies, but much of the decline comes from the sharp drop in the cost of making solar panels and their increased efficiency in converting sunlight into electricity.

The average cost of solar power for residential, commercial and utility-scale projects declined 73% between 2010 and 2016. Solar electricity now costs 5 to 6 cents per kilowatt-hour (the amount of energy needed to light a 100-watt bulb for 10 hours) to produce, or about the same as electricity produced by a natural gas plant and half the cost of a nuclear facility, according to the U.S. Energy Information Administration.
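
As a quick check on that comparison, the arithmetic is simple unit conversion. The short Python sketch below uses only the 5-to-6-cent production-cost range quoted above; it is an illustration, not additional data.

    # A 100-watt bulb running for 10 hours uses exactly one kilowatt-hour.
    bulb_watts = 100
    hours = 10
    energy_kwh = bulb_watts * hours / 1000
    for cost_per_kwh in (0.05, 0.06):
        print(f"1 kWh (a 100-watt bulb for 10 hours) costs about ${energy_kwh * cost_per_kwh:.2f} of solar generation")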

Fly over the Carrizo Plain in California’s Central Valley near San Luis Obispo and you’ll see that what was once barren land is now a sprawling solar farm, with panels covering more than seven square miles — one of the world’s largest clean-energy projects. When the sun shines over the Topaz Solar Farm, the shimmering panels produce enough electricity to power all of the residential homes in a city the size of Long Beach, population 475,000.

The Topaz Solar Farm, one of the world’s largest solar plants, blankets the Carrizo Plain in the Central Valley. It supplies electricity to Pacific Gas & Electric Co. (NASA)

Other large-scale solar operations blanket swaths of the Mojave Desert, which has increasingly become a sun-soaking energy hub. The Beacon solar project covers nearly two square miles and the Ivanpah plant covers about five and a half square miles.

The state’s three big shareholder-owned utilities now count themselves among the biggest solar power producers. Southern California Edison produces or buys more than 7% of its electricity from solar generators, Pacific Gas & Electric 13% and San Diego Gas & Electric 22%.

Similarly, fly over any sizable city and you’ll see warehouses, businesses and parking lots with rooftop solar installations, and many homes as well.

With a glut of solar power at times, CAISO has two main options to avoid a system overload: order some solar and wind farms to temporarily halt operations or divert the excess power to other states.

That’s because too much electricity can overload the transmission system and result in power outages, just as too little can. Complicating matters is that even when CAISO requires large-scale solar plants to shut off panels, it can’t control solar rooftop installations that are churning out electricity.

CAISO is being forced to juggle this surplus more and more.

In 2015, solar and wind production were curtailed about 15% of the time on average during a 24-hour period. That rose to 21% in 2016 and 31% in the first few months of this year. The surge in solar production accounts for most of this, though heavy rainfall has increased hydroelectric power production in the state this year, adding to the surplus of renewables.

California’s clean energy supply is growing so fast that solar and wind producers are increasingly being ordered to halt production.

Even when solar production is curtailed, the state can produce more than it uses, because it is difficult to calibrate supply and demand precisely. As more homeowners install rooftop solar, for example, their panels can send more electricity to the grid than anticipated on some days, while the state’s overall power usage might fall below what was expected.

This means that CAISO increasingly has excess solar and wind power it can send to Arizona, Nevada and other states.

When those states need more electricity than they are producing, they pay California for the power. But California has excess power on a growing number of days when neighboring states don’t need it, so California has to pay them to take it. CAISO calls that “negative pricing.”

Why does California have to pay rather than simply give the power away free?

When there isn’t demand for all the power the state is producing, CAISO needs to quickly sell the excess to avoid overloading the electricity grid, which can cause blackouts. Basic economics kick in. Oversupply causes prices to fall, even below zero. That’s because Arizona has to curtail its own sources of electricity to take California’s power when it doesn’t really need it, which can cost money. So Arizona will use power from California at times like this only if it has an economic incentive — which means being paid.
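
To make the "negative pricing" logic concrete, here is a small illustrative Python sketch. The numbers are hypothetical placeholders, not CAISO market data; the point is only that a neighboring buyer accepts surplus power when the payment it receives at least covers the cost of ramping down its own generation to make room for it.

    # Illustrative only: hypothetical costs, not CAISO market data.
    def buyer_accepts(payment_per_mwh, curtailment_cost_per_mwh):
        """The buyer takes the surplus power only if the payment covers
        the cost of curtailing its own generation."""
        return payment_per_mwh >= curtailment_cost_per_mwh

    curtailment_cost = 20.0               # hypothetical $/MWh for the buyer to ramp down its own plants
    for payment in (0.0, 10.0, 25.0):     # $/MWh paid to the buyer, i.e. a negative price for the seller
        print(f"pay the buyer ${payment:.0f}/MWh -> accepted: {buyer_accepts(payment, curtailment_cost)}")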

In the first two months of this year, CAISO paid to send excess power to other states seven times more often than in the same period in 2014. “Negative pricing” happened in an average of 18% of all sales, versus about 2.5% in the same period in 2014.

Most “negative pricing” typically has occurred for relatively short periods at midday, when solar production is highest.

But what happened in March shows how the growing supply of solar power could have a much greater impact in the future. The periods of “negative pricing” lasted longer than in the past — often for six hours at a time, and once for eight hours, according to a CAISO report.

The excess power problem will ease somewhat in the summer, when electricity usage is about 50% higher in California than in the winter.

But CAISO concedes that curtailments and “negative pricing” are likely to happen even more often in the future as solar power production continues to grow, unless action is taken to better manage the excess electricity.

The sprawling Ivanpah Solar Electric Generating System, owned by NRG Energy and BrightSource Energy, occupies 5.5 square miles in the Mojave Desert. The plant can supply electricity to 180,000 Pacific Gas & Electric and Southern California Edison customers. (Mark Boster/Los Angeles Times)

Arizona’s largest utility, Arizona Public Service, is one of the biggest beneficiaries of California’s largesse because it is next door and the power can easily be sent there on transmission lines.

On days that Arizona is paid to take California’s excess solar power, Arizona Public Service says it has cut its own solar generation rather than fossil fuel power. So California’s excess solar isn’t reducing greenhouse gases when that happens.

CAISO says it does not calculate how much it has paid others so far this year to take excess electricity. But its recent oversupply report indicated that it frequently paid buyers as much as $25 per megawatt-hour to get them to take excess power, according to the Energy Information Administration.

That’s a good deal for Arizona, which uses what it is paid by California to reduce its own customers’ electricity bills. Utility buyers typically pay an average of $14 to $45 per megawatt-hour for electricity when there isn’t a surplus from high solar power production.

With solar power surging so much that it is sometimes curtailed, does California need to spend $6 billion to $8 billion to build or refurbish eight natural gas power plants that have received preliminary approval from regulators, especially as legislative leaders want to accelerate the move away from fossil fuel energy?

The answer depends on whom you ask.

Utilities have repeatedly said yes. State regulators have agreed until now, approving almost all proposals for new power plants. But this month, citing the growing electricity surplus, regulators announced plans to put on hold the earlier approvals of four of the eight plants to determine if they really are needed.

Big utilities continue to push for all of the plants, maintaining that building natural gas plants doesn’t conflict with expanding solar power. They say both paths are necessary to ensure that California has reliable sources of power — wherever and whenever it is needed.

The biggest industrial solar power plants, they note, produce electricity in the desert, in some cases hundreds of miles from population centers where most power is used.

At times of peak demand, transmission lines can get congested, like Los Angeles highways. That’s why CAISO, utilities and regulators argue that new natural gas plants are needed closer to big cities. In addition, they say, the state needs ample electricity sources when the sun isn’t shining and the wind isn’t blowing enough.

Utility critics agree that some redundancy is needed to guarantee reliability, but they contend that the state already has more than enough.

California has so much surplus electricity that existing power plants run, on average, at slightly less than one-third of capacity. And some plants are being closed decades earlier than planned.
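
The "one-third of capacity" figure is a capacity factor: the share of a plant's maximum possible annual output that it actually delivers. The Python sketch below illustrates the calculation with made-up plant numbers, not data for any actual California plant.

    # Hypothetical plant, for illustration of the capacity-factor calculation only.
    nameplate_mw = 500                       # maximum output of the plant
    hours_per_year = 8760
    actual_generation_mwh = 1_400_000        # what the plant actually produced over the year
    capacity_factor = actual_generation_mwh / (nameplate_mw * hours_per_year)
    print(f"capacity factor: {capacity_factor:.0%}")   # about 32%, i.e. just under one-third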

As for congestion, critics note that the state already is crisscrossed with an extensive network of transmission lines. Building more plants and transmission lines wouldn’t make the power system much more reliable, but would mean higher profits for utilities, critics say.

That is what the debate is about, said Jaleh Firooz, a power industry consultant who previously worked as an engineer for San Diego Gas & Electric for 24 years and helped in the formation of CAISO.

“They have the lopsided incentive of building more,” she said.

Jaleh Firooz, who worked 24 years as an engineer for San Diego Gas & Electric Co., says utilities seeking higher profits “have the lopsided incentive of building more” power plants and transmission lines. (Robert Gauthier/Los Angeles Times)

The reason: Once state regulators approve new plants or transmission lines, the cost is built into the amount that the utility can charge electricity users, no matter how much or how little the new capacity is used.

Given that technology is rapidly tilting the competitive advantage toward solar power, there are less expensive and cleaner ways to make the transition toward renewable energy, she said.

To buttress her argument, Firooz pointed to a battle in recent years over a natural gas plant in Redondo Beach.

Independent power producer AES Southland in 2012 proposed replacing an aging facility there with a new one. The estimated cost: $250 million to $275 million, an amount that customers would pay off with higher electricity bills.

CAISO and Southern California Edison, which was going to buy power from the new plant, supported it as necessary to protect against potential power interruptions. Though solar and wind power production was increasing, they said those sources couldn’t be counted on because their production is variable, not constant.

The California Public Utilities Commission approved the project, agreeing that it was needed to meet the long-term electricity needs in the L.A. area.

But the California Coastal Conservancy, a conservation group opposed to the plant, commissioned an analysis by Firooz to determine how vital it was. Her conclusion: not at all.

Firooz calculated that the L.A. region already had excess power production capacity — even without the new plant — at least through 2020.

Along with the cushion, her report found, a combination of improved energy efficiency, local solar production, storage and other planning strategies would be more than sufficient to handle the area’s power needs even as the population grew.

She questioned utility arguments.

“In their assumptions, the amount of capacity they give to the solar is way, way undercut because they have to say, ‘What if it’s cloudy? What if the wind is not blowing?’ ” Firooz explained. “That’s how the game is played. You build these scenarios so that it basically justifies what you want.”

Undeterred, AES Southland pressed forward with its proposal. In 2013, Firooz updated her analysis at the request of the city of Redondo Beach, which was skeptical that a new plant was needed. Her findings remained the same.

Nonetheless, the state Public Utilities Commission approved the project in March 2014 on the grounds that it was needed. But the California Energy Commission, another regulatory agency whose approval for new plants is required along with the PUC’s, sided with the critics. In November 2015 it suspended the project, effectively killing it.

Asked about the plant, AES said it followed the appropriate processes in seeking approval. It declined to say whether it still thinks that a new plant is needed.

The existing facility is expected to close in 2020.

A March 2017 state report showed why critics are confident that the area will be fine without a new plant: The need for power from Redondo Beach’s existing four natural gas units has been so low, the state found, that the units have operated at less than 5% of their capacity during the last four years.

Exxon Makes a Biofuel Breakthrough

By Jennifer A Dlouhy, June 19, 2017

J. Craig Venter’s Synthetic Genomics teamed with Exxon Mobil

Technique could lead to commercialization of algae-based fuels

It’s the holy grail for biofuel developers hoping to coax energy out of algae: Keep the organism fat enough to produce oil but spry enough to grow quickly.

J. Craig Venter, the scientist who mapped the human genome, just helped Exxon Mobil Corp. strike that balance, with a breakthrough that could enable widespread commercialization of algae-based biofuels. Exxon and Venter’s Synthetic Genomics Inc. are announcing the development at a conference in San Diego on Monday.

They used advanced cell engineering to more than double the fatty lipids inside a strain of algae. The technique may be replicated to boost numbers on other species too.

"Tackling the inner workings of algae cells has not been trivial," Venter said. "Nobody’s really ever been there before; there’s no guideline to go by."

Venter, who co-founded Synthetic Genomics and sequenced the human genome in the 1990s, says the development is a significant advancement in the quest to make algae a renewable energy source. The discovery is being published in the July issue of the journal Nature Biotechnology.

Prior story: Exxon’s Algae Venture May Take Quarter Century to Yield Fuel

It’s taken eight years of what Venter called tedious research to reach this point.

When Exxon Mobil announced its $600 million collaboration with Synthetic Genomics in 2009, the oil company predicted it might yield algae-based biofuels within a decade. Four years later, Exxon executives conceded a better estimate might be within a generation.

Developing strains that reproduce and generate enough of the raw material to supply a refinery meant the venture might not succeed for at least another 25 years, former chief executive and current U.S. Secretary of State Rex Tillerson said at the time.

Even with this newest discovery, commercialization of this kind of modified algae is decades away.

Venter says the effort has "been a real slog."

"It’s to the team’s credit -- it’s to Exxon’s credit -- that they believed the steps in the learning were actually leading some place," he said. "And they have."

The companies forged on -- renewing their joint research agreement in January amid promising laboratory results.

Exxon declined to disclose how much the Irving, Texas-based company has invested in the endeavor so far. Vijay Swarup, a vice president at ExxonMobil Research and Engineering Co., says the collaboration is part of the company’s broad pursuit of "more efficient ways to produce the energy and chemicals" the world needs and "mitigate the impacts of climate change."

Carbon Consumer

Where Exxon’s chief products -- oil and natural gas -- generate carbon dioxide emissions that drive the phenomenon, algae is a CO2 consumer, Swarup said.

Most renewable fuels today are made from plant material, including corn, corn waste and soybean oil. Algae has long been considered a potentially more sustainable option; unlike those traditional biofuels, it can grow in salt water and thrive under harsh environmental conditions. And the oil contained in algae potentially could be processed in conventional refineries.

The Exxon and Synthetic Genomics team found a way to regulate the expression of genes controlling the accumulation of lipids, or fats, in the algae -- and then use it to double the strain’s lipid productivity while retaining its ability to grow.

"To my knowledge, no other group has achieved this level of lipid production by modifying algae, and there’s no algae in production that has anything like this level," Venter said in a telephone interview. It’s "our first super-strong indication that there is a path to getting to where we need to go."

Nitrogen Starved

They searched for the needed genetic regulators after observing what happened when cells were starved of nitrogen -- a tactic that generally drives more oil accumulation. Using the CRISPR-Cas9 gene-editing technique, the researchers were able to winnow a list of about 20 candidates to a single regulator -- they call it ZnCys -- and then to modulate its expression.

Test strains were grown under conditions mimicking an average spring day in southern California.

Rob Brown, Ph.D., senior director of genome engineering at Synthetic Genomics, likened the tactic to forcing an agile algae athlete to sit on the bench.

"We basically take an athlete and make them sit on a couch and get fat," Brown said. "That’s the switch — you grab this guy off the track and you put him on a couch and he turns into a couch potato. So everything he had in his body that was muscle, sinew, carbohydrates -- we basically turn that into a butterball. That’s what we’re basically doing with this system.”

Without the change, most algae growing in this environment would produce about 10 to 15 percent oil. The Exxon and Synthetic Genomics collaboration yielded a strain with more than 40 percent.

Venter, who is also working on human longevity research, views the development as a significant step toward the sustainable energy he believes humans need as they live longer, healthier lives. The study also is proof, he says, that "persistence pays."

"You have to believe in what you’re doing and that where you’re headed really is the right direction," he said, "and sometimes, like this, it takes a long time to really prove it.”

Hundreds of scientists call for caution on anti-microbial chemical use

More than 200 scientists outline a broad range of concerns for triclosan and triclocarban and call for reduced use worldwide

June 20, 2017, By Brian Bienkowski, Environmental Health News

Two ingredients used in thousands of products to kill bacteria, fungi and viruses linger in the environment and pose a risk to human health, according to a statement released today by more than 200 scientists and health professionals.

The scientists say the possible benefits in most uses of triclosan and triclocarban—used in some soaps, toothpastes, detergents, paints, carpets—are not worth the risk. The statement, published today in the Environmental Health Perspectives journal, urges “the international community to limit the production and use of triclosan and triclocarban and to question the use of other antimicrobials.”

“Triclosan and triclocarban have been permitted for years without definitive proof they’re providing benefits.” -Avery Lindeman, Green Policy Institute

They also call for warning labels on any product containing triclosan and triclocarban and for bolstered research of the chemicals' environmental toll.

The statement says evidence that the compounds are accumulating in water, land, wildlife and humans is sufficient to merit action.

“We want to draw attention to the increasing use of antimicrobials in a lot of products,” said Dr. Ted Schettler, science director of the Science and Environmental Health Network and one of the signatories of the statement. “Triclosan and triclocarban rose to the top because they’ve been in use for so long and exposures are so widespread.”

The chemicals are used to kill microbes such as bacteria and viruses that make people ill. However, both chemicals affect animals’ hormone systems, causing reproductive and developmental problems. And there is nascent evidence that the impacts may extend to humans as well: exposure has been linked to reduced growth of fetuses, earlier births, and smaller head circumference in boys at birth.

The compounds are used in an estimated 2,000 products but are being phased out of some uses. In February the EU banned triclosan in hygiene products. U.S. manufacturers are phasing out triclosan from hand soaps after the Food and Drug Administration banned it last year amid concerns that the compound disrupted the body's hormone systems.

The FDA noted in the restriction that antibacterial hand soaps were no more effective than non-antibacterial soap and water at preventing illness.

“Triclosan and triclocarban have been permitted for years without definitive proof they’re providing benefits,” said Avery Lindeman, deputy director of the Green Policy Institute and one of the signatories of the statement. The compounds, she added, serve as little more than a “marketing ploy” for many products, such as antimicrobial cutting boards and socks.

Despite soap bans, triclosan remains in Colgate Total toothpaste, some cleaning products and cosmetics. More worrisome, Lindeman said, some manufacturers of personal care products are simply substituting other antimicrobials for triclosan, some of which may pose the same risks to people and the environment.

Triclosan and triclocarban also show up in odd places, such as building products, Schettler said. “Some building materials are subject to microbial degradation, attacked by things like fungi that break them down, so manufacturers will put antimicrobials in there to reduce the risk,” he said.

Because of the widespread use, most people have some levels of triclosan in them. A 2008 study of U.S. residents found it in the urine of about 75 percent of people tested.

Once the compounds get into the environment, they don’t readily go away. Researchers have detected triclosan and triclocarban in water and sediment all over the world—including drinking water, oceans and streams. The U.S. Geological Survey found triclosan in 60 percent of U.S. streams. Studies have shown triclosan to be toxic to some algae, fish and crustaceans.

The compounds impact hormones in animal studies. And there’s evidence that they may do the same to developing babies. Properly functioning hormones are critical for babies’ development. Last month Brown University researchers reported that mothers’ triclosan exposure during pregnancy was linked to lower birth weights, smaller heads and earlier births. They also found that as the children aged, triclosan levels spiked after they brushed their teeth or washed their hands.

In addition to endocrine disruption concerns, Lindeman and other signers outline two other potential human health impacts from exposure to triclosan: heightened sensitivity to allergens, and antibiotic resistance.

Large studies of children in the United States and Norway have linked triclosan to allergies and worsening asthma. And there is evidence bacteria that develop resistance to triclosan also become resistant to other antibacterial compounds.

Companies such as Colgate have defended triclosan use. The American Chemistry Council, which represents chemical manufacturers, declined to comment on the statement.

The authors of the statement recognize the need for antimicrobials and didn’t call for a total ban, Schettler said. Toothpastes with triclosan can help people with gum disease and in hospitals it is crucial to have such germ-killing soaps for pre-surgery and to use around people with immune system problems, he said.

But the everyday use could be curbed, he added, and the hope is the statement starts a broader conversation.

“Whether it will have an influence on the policy level remains to be seen, but a lot can come together through consumer interest and concerns causing manufacturers to slowly shift directions,” Schettler said.

Hygiene leaves kids with loads of triclosan

Seemingly healthy actions like brushing teeth and washing hands leave kids with increasing levels of a controversial endocrine disrupter. Are the anti-bacterial benefits worth the risk?

June 1, 2017, By Brian Bienkowski, Environmental Health News

Levels of a controversial chemical meant to kill bacteria spike in the bodies of young children after they brush their teeth or wash their hands, according to a new study.

U.S. manufacturers are phasing triclosan out of hand soaps after the Food and Drug Administration banned it, effective last year, amid concerns that the compound disrupted the body's hormone systems. It remains in Colgate Total toothpaste, some cleaning products and cosmetics. Health experts say exposure is best avoided for babies in the womb and developing children.

The latest study, published in the journal Environmental Science & Technology, is one of the first to show that children’s levels rise through their first few years of life. Hand washing and teeth brushing have speedy, significant impact on levels, the researchers found.

“There’s very little data on the exposure in those first years of life,” said the senior author of the study, Joe Braun, an assistant professor and researcher at Brown University. “There are a lot of behavioral changes in those years, and environmental chemicals can play a role.”

Braun and colleagues tested the urine of 389 mothers and their children from Cincinnati, collecting samples from the women three times during pregnancy and from the children periodically between 1 and 8 years old.

They found triclosan in more than 70 percent of the samples. Among 8-year-olds, levels were 66 percent higher in those who used hand soap. And more washing left the children with higher loads: those who reported washing their hands more than five times per day had more than four times the triclosan concentrations of those washing once or less per day.

Children who had brushed their teeth within the last day had levels 2.5 times higher than those who had a toothpaste-free 24-hour span.

“It’s a thorough, well-done analysis,” said Isaac Pessah, a researcher and professor at University of California, Davis, who was not involved in the study. “Given the high concentrations [of triclosan] in personal care products, you’re seeing that the concentrations in the end user are also quite high.”

Braun said the levels of triclosan rose as the children aged, eventually leveling off. “Their levels were almost to moms’ levels by the time they reached 5 to 8 years of age.”

"There is evidence that triclosan is positive for dental health but if it's not recommended from your dentist that you use this product, then don't."-Joe Braun, Brown University

This, he said, is likely due to more frequent use of personal care products as the kids aged. Despite the hand soap ban, triclosan remains on the market because it is effective at fighting plaque and gingivitis.

Colgate Total contains 0.3 percent of the antibacterial to “fight harmful plaque germs.”

Colgate’s parent company, Colgate-Palmolive, did not respond to requests to comment on Braun’s study. The company has, however, maintained triclosan’s safety in its toothpaste and its role in fighting plaque and gum disease.

“As for claims of endocrine disruption, the World Health Organization defines an endocrine disruptor as something harmful to one’s health, and that is simply not the case here,” wrote Colgate’s head of R&D, Patricia Verduin, in response to media reports on triclosan’s endocrine impacts.

Braun, however, said there is “quite compelling” evidence from animal studies that triclosan decreases thyroid hormone levels. Properly functioning thyroid hormones are critical for brain development.

Just last month, using the same mothers and children, Braun and others reported that mothers’ triclosan exposure during pregnancy was linked to lower birth weights, smaller heads and earlier births.

In addition, Pessah and colleagues reported triclosan hinders proper muscle development. The researchers used mice and fish, finding that triclosan affects the process responsible for muscle contraction.

Braun said the study was limited in that they didn’t know what brands of toothpaste or soap kids were using. Also, product use was self-reported and mothers may not want to say their kids weren’t brushing their teeth, he said.

Researchers do not dispute triclosan’s effectiveness at reducing bacteria. And Pessah said there is a role for it in protecting oral health. “Perhaps if we would have made triclosan available by prescription for some people, like those with secondary infection on their gums from diabetes, we wouldn’t reach the contamination levels we now see.”

Braun said people might take pause next time they’re browsing for bathroom and cleaning products.

“There is evidence that triclosan is positive for dental health, but if it’s not recommended from your dentist that you use this product, then don’t,” Braun said. “Especially if you’re a young child or pregnant woman.”

In heart of Southwest, natural gas leaks fuel a methane menace

By Tom Knudson and Gayathri Vaidyanathan / June 14, 2017

BLANCO, N.M. – Most evenings, the quiet is almost intoxicating.

The whoosh of the wind through the junipers, the whinny of horses in their stalls, the raspy squawking of ravens – those are the sounds Don and Jane Schreiber have grown to love on their remote Devil’s Spring Ranch.

The views are mesmerizing, too. Long, lonesome ridges of khaki-colored rocks, dome-like outcrops and distant mesas rise from a sea of sage and rabbitbrush.

The ranch and surrounding countryside are a surprising setting for an enduring climate change problem: a huge cloud of methane – a potent, heat-trapping gas – that is 10 times larger than the city of Chicago. The main sources, scientists say, are leaks from about 25,000 active and abandoned wells and 10,000 miles of pipelines that snake across the San Juan Basin, providing about $3 billion worth of natural gas per year.

Home to stunning prehistoric ruins, cathedral-like canyons and silt-laden rivers, this basin in the Four Corners region of the Southwest emits substantially more methane per unit of energy produced than most major gas-producing areas, according to a Reveal from The Center for Investigative Reporting analysis of industry data reported to the federal government.

ConocoPhillips has been the region’s largest source of methane emissions, mostly from thousands of gas wells drilled on public lands managed by the Department of Interior. BP America is the second largest source.

“This is the Land of Enchantment. I love this state,” said Don Schreiber, whose ranch and 5,760 acres of nearby public grazing land are dotted with more than 120 natural gas wells. “But it is a black mark on our reputation that we allow this to persist.”

Every year, an estimated half million tons of methane loft into the air over the basin, a geologic area located primarily in northwestern New Mexico, according to a recent study published in the journal Environmental Science & Technology. That’s equivalent to the annual greenhouse gases emissions of almost 3 million cars, “a substantial amount of methane emitted from one region,” said Eric Kort, an atmospheric scientist at the University of Michigan and co-author of the study.
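
The car comparison can be sanity-checked with rough numbers. In the Python sketch below, the 100-year global warming potential for methane and the per-car CO2 figure are common ballpark values assumed for illustration; they are not taken from the study itself.

    # Rough check of the "almost 3 million cars" comparison; GWP and per-car values are
    # ballpark assumptions, not figures from the study.
    methane_tons_per_year = 500_000      # estimated annual methane emissions over the basin
    gwp_100 = 28                         # approximate 100-year global warming potential of methane
    co2e_tons = methane_tons_per_year * gwp_100
    co2_tons_per_car = 4.6               # rough annual CO2 emissions of a typical passenger car
    print(f"{co2e_tons:,} tons CO2e is roughly {co2e_tons / co2_tons_per_car / 1e6:.1f} million cars' worth")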

Although methane accounts for only 9 percent of U.S. greenhouse gases, it traps heat far more effectively than carbon dioxide. That’s why controlling it is considered essential to addressing climate change.

And while methane isn’t harmful to human health, other gases, such as benzene, that come from the wells are linked to cancer and other effects. Studies have linked the proximity of gas wells to a higher risk of birth defects in Colorado and increased hospital visits and respiratory and skin problems in Pennsylvania.

The San Juan Basin and Four Corners region – where Utah, Colorado, New Mexico and Arizona meet – are home to large expanses of Native American lands, including the 27,400-square-mile Navajo Nation, where some residents say the gases are making them sick, too.

“I feel nausea sometimes. I feel weak,” said Sam Dee, a 59-year-old rodeo announcer on the Navajo reservation in Utah just outside the basin. Near his home are gas and oil plants where pollution is flared into the air, contributing to the Four Corners methane cloud. “I am concerned for my grandkids, for their health and their climate.”

EPA suspends methane rules

Cleaner-burning than coal and oil, natural gas is touted as a steppingstone to a more climate-friendly future. Getting it out of the ground, though, isn’t easy. Imagine a pressure cooker filled with mud, water, crude oil, natural gas and other substances. That pressure cooker is the earth. When companies sink wells, the natural gas – which is mostly methane – comes hurtling out along with everything else.

Inevitably leaks happen. In addition, large amounts of gas are vented and flared into the air intentionally to relieve pressure and force water out of wells.

Controlling methane was a priority of the Obama administration in an effort to combat climate change and air pollution. But President Donald Trump has moved quickly to suspend, and perhaps rescind, rules he considers burdensome to the energy industry.

In May 2016, the Environmental Protection Agency adopted a regulation aimed at reducing emissions from new and modified wells on private land by a half million tons by 2025. A few months later, oil and gas groups sued the agency to block the rule, calling it “excessive, uneconomic and threatening to the long-term production of oil and natural gas.”

Two months after taking office, Trump issued an executive order directing federal agencies to reconsider restrictions on energy development. In response, EPA Administrator Scott Pruitt suspended the methane regulation and launched a review. He also ended an effort to gather information about leaks from old wells.

Pruitt, who was Oklahoma’s attorney general for six years, has close ties to energy companies. Oil and gas companies donated more than $100,000 over the past two years to a political action committee tied to Pruitt, according to Federal Election Commission records. Documents and emails also show that Pruitt has had a long-time alliance with Devon Energy, the San Juan Basin’s second-largest oil and gas producer. Devon attorneys drafted a letter that Pruitt, as attorney general, sent to President Barack Obama’s EPA to challenge the methane rules.

Another Obama administration regulation targeted methane emissions from wells on public lands. But the Interior Department announced last month that it would “suspend, revise or rescind (the rule) given its significant regulatory burden that encumbers American energy production, economic growth and job creation.”

Nationally, oil and gas companies would have to spend between $125 million and $161 million a year to reduce leaks and flaring on public lands, although the recovered gas would offset much of that cost, according to the Bureau of Land Management.

Tracking down leaks

A massive plume of methane (brownish area at center) has been discovered near the Four Corners region that scientists attribute to natural gas production. The map above shows 2012 methane emissions from natural gas and oil production, the most recent data available.

Sources: Gridded National Inventory of U.S. Methane Emissions, Stamen Design. Credit: Eric Sagara/Reveal

Methane is a stealth pollutant. You can’t see it. At Four Corners, the first hint of trouble came from space in 2003, when a spectrometer on a satellite detected an enormous cloud of methane northeast of the Grand Canyon, stretching across 2,500 square miles. Each time the satellite orbited the Earth, the cloud was there.

In 2014, scientists identified the oil and gas fields of the San Juan Basin as the possible source. Energy companies pushed back; there are other sources, they pointed out, including natural seeps. But the scientists weren’t finished. They mounted a device on a plane that could spot individual leaks. They drove to many sites and recorded videos.

Of 250 plumes, four were from natural geologic seeps. One was from a coal mine. Everything else was from oil and gas drilling.

The companies maintain that these studies are too narrow in scope to be conclusive.

Although the San Juan Basin produces just under 4 percent of the nation’s natural gas, it emitted 10 percent of all methane pollution from oil and gas operations in 2015, according to industry data. One reason why: The region is larger than most national parks. Reaching wells and pipelines to repair leaks takes time. A lot can go wrong: Gaskets freeze, valves malfunction and welds fail, according to reports filed with the New Mexico Oil Conservation Division.

Some venting and flaring is unavoidable. But around 40 percent could be captured “with currently available control technologies,” the U.S. Government Accountability Office reported in 2010.

To the north, in Wyoming, where state officials have enacted methane rules, the upper Green River Basin produces twice as much gas as the San Juan Basin but emits only about half as much methane, according to Jon Goldstein, senior energy policy manager at the Environmental Defense Fund.

New Mexico has no state regulation.

“We live under the largest methane cloud in the entire country thanks to over 35,000 wells that are just across the line in New Mexico,” said Gwen Lachelt, a La Plata County, Colorado, commissioner who lobbied in the U.S. Senate to retain the Bureau of Land Management’s methane measure for public land.

Dirtier here than elsewhere

Every year oil and gas companies report methane emissions and production data to the federal government. That information, obtained and analyzed by Reveal, identifies the polluters.

ConocoPhillips, the basin’s largest oil and gas producer with about 10,000 wells, emitted methane in 2015 at a rate more than three times higher than its rate for the rest of the country.

For BP America, its methane emissions rate in the basin was 17 times higher than its national rate. For Devon Energy, it was twice as high.
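
The "rate" in these comparisons is an emissions intensity: methane emitted per unit of natural gas produced. The Python sketch below illustrates the calculation with hypothetical volumes; it does not use the actual company data Reveal analyzed.

    # Hypothetical volumes, shown only to illustrate the intensity calculation.
    def methane_intensity(methane_emitted, gas_produced):
        """Methane emitted per unit of gas produced (use the same units for both)."""
        return methane_emitted / gas_produced

    basin = methane_intensity(methane_emitted=30.0, gas_produced=1_000.0)
    national = methane_intensity(methane_emitted=10.0, gas_produced=1_000.0)
    print(f"basin intensity is {basin / national:.0f}x the company's national intensity")
    # A company whose basin operations leak three times as much per unit of output
    # would show the kind of 3x gap reported for ConocoPhillips above.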

“The industry there is dirtier than the industry in the country as a whole,” said David McCabe, an atmospheric scientist with the nonprofit environmental group Clean Air Task Force. “You need standards so the public can have some assurance companies will take care of this.”

ConocoPhillips spokesman Daren Beaudo defended its San Juan Basin operations, which it sold in April to Texas-based Hilcorp Energy Co. and The Carlyle Group in Washington, D.C., for $3 billion. Nationwide, the company nearly halved the rate at which it emitted methane between 2012 and 2015, according to the data analyzed by Reveal.

“We’re concerned about emissions and we’ve been taking action to reduce them for more than eight years,” Beaudo said. “With a large number of operating wells, even small emissions add up.”

Living with gas wells

Few people are more familiar with the environmental costs of gas wells than Don and Jane Schreiber. Several wells are actually on their land courtesy of a federal “split estate” law that allows companies access to private land.

They’ve battled over road construction, dust, soil erosion, hazardous waste disposal and loss of wildlife habitat. “Never have I said, ‘Don’t drill,’” said Don Schreiber. “What I’ve said is, ‘Don’t drill this way.’ ”

“Natural gas is a fine fuel,” he added. “But it is finite. That we would allow ourselves to waste it, cause local health problems and global climate change, there is everything in the world wrong with that.”

Not long ago, he drove around the countryside in his pickup, pointing out drilling sites and scars. “We’re down to fighting for the air we breathe,” he said.

Driving past one well on public land, he stepped on the brake. Under cumulus clouds as white as pearls and surrounded by pinyon pine, juniper and sagebrush, the setting was majestic. The air smelled like a gas station.

Fungal toxins easily become airborne, creating potential indoor health risk

American Society for Microbiology

Washington, DC - June 23, 2017 - Toxins produced by three different species of fungus growing indoors on wallpaper may become aerosolized, and easily inhaled. The findings, which likely have implications for "sick building syndrome," were published in Applied and Environmental Microbiology, a journal of the American Society for Microbiology.

"We demonstrated that mycotoxins could be transferred from a moldy material to air, under conditions that may be encountered in buildings," said corresponding author Jean-Denis Bailly, DVM, PhD, Professor of Food Hygiene, National Veterinary School of Toulouse, France. "Thus, mycotoxins can be inhaled and should be investigated as parameters of indoor air quality, especially in homes with visible fungal contamination."

The impetus for the study was the dearth of data on the health risk from mycotoxins produced by fungi growing indoors. (Image: microscopic view of a sporulating Aspergillus, showing numerous light spores that can easily be aerosolized and inhaled together with mycotoxins. Credit: Sylviane Bailly.)

In the study, the investigators built an experimental bench that simulates airflow over a piece of contaminated wallpaper, controlling the speed and direction of the air. They then analyzed the resulting bioaerosol.

"Most of the airborne toxins are likely to be located on fungal spores, but we also demonstrated that part of the toxic load was found on very small particles -- dust or tiny fragments of wallpaper, that could be easily inhaled," said Bailly..

The researchers used three fungal species in their study: Penicillium brevicompactum, Aspergillus versicolor, and Stachybotrys chartarum. These species, long studied as sources of food contaminants, also "are frequent indoor contaminants," said Bailly. He noted that they produce different mycotoxins, and their mycelia are different from one another, likely leading to differences in the quantity of mycotoxins they loft into the air. (Mycelia are the thread-like projections of fungi that seek nutrition and water from the environment.)

The findings raised two new scientific questions, said Bailly. First, "There is almost no data on toxicity of mycotoxins following inhalation," he said, noting that most research has focused on such toxins as food contaminants.

Second, the different fungal species put different quantities of mycotoxins in the air, "probably related to mycelium organization," but also possibly related to the mechanisms by which mycotoxins from different fungi become airborne -- for example via droplets of exudate versus accumulation in spores. Such knowledge could help in prioritizing those species that may be of real importance in wafting mycotoxins, he said.

Bailly noted that the push for increasingly energy efficient homes may aggravate the problem of mycotoxins indoors. Such homes "are strongly isolated from the outside to save energy," but various water-using appliances such as coffee makers "could lead to favorable conditions for fungal growth," he said.

"The presence of mycotoxins in indoors should be taken into consideration as an important parameter of air quality," Bailly concluded.

The American Society for Microbiology is the largest single life science society, composed of over 50,000 scientists and health professionals. ASM's mission is to promote and advance the microbial sciences.

How the climate can rapidly change at tipping points

A new study shows: Gradual changes in the atmospheric CO2 concentration can induce abrupt climate changes

Alfred Wegener Institute, Helmholtz Centre for Polar and Marine Research

During the last glacial period, the influence of atmospheric CO2 on the North Atlantic circulation resulted in temperature increases of up to 10 degrees Celsius in Greenland within only a few decades -- as indicated by new climate calculations from researchers at the Alfred Wegener Institute and Cardiff University. Their study is the first to confirm that there have been situations in our planet's history in which gradually rising CO2 concentrations have set off abrupt changes in ocean circulation and climate at "tipping points". These sudden changes, referred to as Dansgaard-Oeschger events, have been observed in ice cores collected in Greenland. The results of the study have just been released in the journal Nature Geoscience.

Previous glacial periods were characterised by several abrupt climate changes in the high latitudes of the Northern Hemisphere. However, the cause of these past phenomena remains unclear. In an attempt to better grasp the role of CO2 in this context, scientists from the Alfred Wegener Institute, Helmholtz Centre for Polar and Marine Research (AWI) recently conducted a series of experiments using a coupled atmosphere-ocean-sea ice model.

First author Xu Zhang explains: "With this study, we've managed to show for the first time how gradual increases of CO2 triggered rapid warming." This temperature rise is the result of interactions between ocean currents and the atmosphere, which the scientists used the climate model to explore. According to their findings, the increased CO2 intensifies the trade winds over Central America, as the eastern Pacific is warmed more than the western Atlantic. This in turn produces increased moisture transport from the Atlantic, and with it, an increase in the salinity and density of the surface water. Finally, these changes lead to an abrupt amplification of the large-scale overturning circulation in the Atlantic. "Our simulations indicate that even small changes in the CO2 concentration suffice to change the circulation pattern, which can end in sudden temperature increases," says Zhang.
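The mechanism described here, in which a slowly changing forcing flips the Atlantic overturning from one state to another, is also the hallmark behaviour of classic ocean box models. The sketch below is a one-variable, Stommel-type toy model, not the AWI coupled model; it is offered only to illustrate how gradually ramping a forcing parameter can produce an abrupt reversal of the circulation.

```python
# Minimal Stommel-type box model (illustration only, not the AWI coupled model).
# q = T - S is the overturning strength; the temperature contrast T is held fixed
# while the freshwater forcing F is ramped up very slowly.
T = 1.0          # fixed thermal driving
S = 0.1          # salinity contrast
dt = 0.02
steps = 200_000  # 4000 time units in total

for i in range(steps):
    F = 0.05 + 0.30 * i / steps        # slow ramp of the freshwater forcing
    q = T - S                          # overturning strength
    S += dt * (F - abs(q) * S)         # salinity balance: forcing vs flushing by |q|
    if i % 20_000 == 0:
        print(f"F={F:.3f}  overturning q={q:+.3f}")

# As F creeps past ~0.25 the thermally driven state (q > 0) ceases to exist and
# the circulation abruptly flips sign: a gradual change in forcing, a sudden shift.
```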

Further, the study's authors reveal that rising CO2 levels are the dominant cause of changed ocean currents during the transitions between glacial and interglacial periods. As climate researcher Gerrit Lohmann explains, "We can't say for certain whether rising CO2 levels will produce similar effects in the future, because the framework conditions today differ from those in a glacial period. That being said, we've now confirmed that there have definitely been abrupt climate changes in the Earth's past that were the result of continually rising CO2 concentrations."

Cut US commercial building energy use 29 percent with widespread controls

Programming, maintaining building controls can lower national power bill

DOE/Pacific Northwest National Laboratory

RICHLAND, Wash. - Like driving a car despite a glowing check-engine light, large buildings often chug along without maintenance being performed on the building controls designed to keep them running smoothly.

And sometimes those controls aren't used to their full potential, similar to a car at high speed in first gear. Instead of an expensive visit to the mechanic, the result for a commercial building is a high power bill.

A new report finds that if commercial buildings fully used controls nationwide, the U.S. could slash its energy consumption by the equivalent of what is currently used by 12 to 15 million Americans.

The report examines how 34 different energy efficiency measures, most of which rely on various building controls, could affect energy use in commercial buildings such as stores, offices and schools. Researchers at the Department of Energy's Pacific Northwest National Laboratory found the measures could cut annual commercial building energy use by an average of 29 percent. This would result in national energy savings of 4 to 5 quadrillion British thermal units, about 4 to 5 percent of the energy consumed nationwide.
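Those headline figures hang together with rough arithmetic: commercial buildings account for roughly 20 percent of U.S. energy use (a figure cited later in this piece), so a 29 percent cut to that slice lands near the report's 4 to 5 percent nationwide estimate. A back-of-the-envelope check, using approximate inputs rather than the report's building-by-building modeling:

```python
# Back-of-the-envelope check on the headline numbers; approximate inputs only.
us_total_quads = 97.0      # rough U.S. annual primary energy use, quadrillion Btu
commercial_share = 0.20    # commercial buildings' approximate share of that total
controls_savings = 0.29    # average savings the report finds for those buildings

saved_quads = us_total_quads * commercial_share * controls_savings
print(f"~{saved_quads:.1f} quadrillion Btu saved, "
      f"~{saved_quads / us_total_quads:.0%} of national use")
# Roughly 5.6 quads (~6 percent), the same ballpark as the report's 4 to 5
# quads and 4 to 5 percent, which rest on far more detailed assumptions.
```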

"Most large commercial buildings are already equipped with building automation systems that deploy controls to manage building energy use," said report co-author and PNNL engineer Srinivas Katipamula. "But those controls often aren't properly programmed and are allowed to deteriorate over time, creating unnecessarily large power bills.

"Our research found significant nationwide energy savings are possible if all U.S. commercial building owners periodically looked for and corrected operational problems such as air-conditioning systems running too long."

An easy, low-cost fix

The report offers the first detailed, national benefit analysis of multiple energy efficiency measures to address building operational problems. Many of these problems can be corrected with very little effort. Unlike other practices that require expensive new technologies, most of the measures evaluated improve energy efficiency by enabling already-installed equipment to work better.

Roughly 20 percent of America's total energy use goes toward powering commercial buildings. And about 15 percent of U.S. commercial buildings have building automation systems that deploy controls, such as sensors that turn on lights or heat a room only when it's occupied. As a result, helping commercial buildings better use their controls could profoundly slash America's overall energy consumption.

Katipamula and his colleagues examined the potential impact of 34 individual energy efficiency measures that can improve commercial building performance, including:

Fixing broken sensors that read temperatures and other measurements

Turning off power-using devices like printers and monitors when a room isn't occupied

Dimming lights in areas with natural lighting

Because combining individual measures can increase energy savings, the researchers also estimated the impacts of packaging energy efficiency measures together. PNNL designed packages of combined measures based on the needs of three different building conditions: buildings already efficient and with little room for improvement, inefficient buildings with a lot of room for improvement, and typical buildings in the middle.

PNNL used computer models of nine prototypical commercial buildings, and extrapolated them to represent five other, similar buildings so it could evaluate energy use in a total of 14 building types. The research team used these prototypical building models with DOE's EnergyPlus building software, which calculated potential energy use given local weather and whichever energy efficiency measures were applied.

Of the individual efficiency measures studied, those with the greatest energy-saving potential nationwide were:

Lowering daytime temperature setpoints for heating, increasing them for cooling, and lowering nighttime heating setpoints: about 8 percent reduction

Reducing the minimum rate for air to flow through variable-air-volume boxes: about 7 percent reduction

Limiting heating and cooling to when a building is most likely to be occupied: about 6 percent reduction

Though the study found all commercial buildings across all climates could have an average total energy savings of 29 percent, some building types were found to have the potential to save more, such as:

Secondary schools: about 49 percent

Standalone retail stores & auto dealerships: about 41 percent

As expected, researchers found inefficient buildings have the greatest potential to save energy. After estimating how common each building condition is in the U.S., researchers found combined efficiency measure packages have the following potential national energy saving ranges:

Inefficient buildings: 30 to 59 percent

Typical buildings: 26 to 56 percent

Efficient buildings: 4 to 19 percent
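How those package ranges roll up into a single national figure depends on how common each building condition is. The sketch below shows a toy version of that weighting, using invented stock shares purely for illustration; the report's own estimates of the building stock are not reproduced here.

```python
# Toy illustration of weighting the package ranges above by building stock.
# The stock shares are invented for illustration; only the method is the point.
package_savings = {                 # midpoints of the ranges quoted above
    "inefficient": (0.30 + 0.59) / 2,
    "typical":     (0.26 + 0.56) / 2,
    "efficient":   (0.04 + 0.19) / 2,
}
assumed_stock_share = {             # hypothetical shares of the commercial stock
    "inefficient": 0.3,
    "typical":     0.5,
    "efficient":   0.2,
}

weighted = sum(package_savings[k] * assumed_stock_share[k] for k in package_savings)
print(f"weighted average savings: {weighted:.0%}")   # result depends on the assumed shares
```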

The Department of Energy's Office of Energy Efficiency and Renewable Energy funded this research.

REFERENCE: N. Fernandez, S. Katipamula, W. Wang, Y. Xie, M. Zhao, C. Corgin, "Impacts of Commercial Building Controls on Energy Savings and Peak Load Reduction," PNNL report to DOE, May 2017, http://buildingretuning.pnnl.gov/publications/PNNL-25985.pdf.

Luxury-quality materials made from waste

By Laura Houseley, CNN

Updated 5:31 AM ET, Thu June 22, 2017

Recycling gets a designer makeover

(CNN)Recycling is a concept as old as trash itself. By now, we're used to seeing useful materials, such as glass and paper, reprocessed into lower-grade versions of themselves, and discarded products upcycled into entirely new designs. (Emeco's 111 Navy chair, made from 111 used Coca-Cola bottles, is a good example.)

But today we're witnessing the emergence of a new recycling trend driven by the luxury design industry: high-quality materials made from waste. These versatile materials, substitutes for conventional woods, plastics and stone, come in sheet or tile form, ready to be cut, shaped and manipulated by architects and fellow designers.


Perhaps because they're being developed by the very designers who are meant to use them, rather than by the manufacturing industry, they're decidedly decorative and attractive as well as strong, economical and easy to use. They bear all the attributes of the materials they might substitute and in some cases, more.


One of the companies at the forefront is Really, a Danish company that transforms used textiles into a sheet material similar to plywood. This past April, the brand revealed its debut collection, a series of benches by designer Max Lamb, at the Salone del Mobile design fair in Milan. Really's warm reception and critical success proved not only the creative potential of these new materials, but also that there is a healthy appetite for them in the design community.

How a Teen's Day at the Beach Turned Into a Climate Breakthrough

Ethan Novek's brainstorm led him to research at Yale while he was still in high school.

By Christina Nunez

PUBLISHED June 30, 2017

Transformational ideas can come from anywhere. From anyone. National Geographic's Chasing Genius is now soliciting ideas around three issues: sustainable planet, global health, and feeding nine billion. Could your solution be a spark of genius? Check out the challenge, where the best ideas for improving our world can each win $25,000.

Ethan Novek was digging a hole in the sand at a beach in Connecticut when he noticed something that got him thinking. The seawater that seeped into the hole was rising at the same rate as the tides offshore. What if that seepage could be harnessed for energy in wells onshore, rather than out in the trickier ocean environment?

That idea kicked off the first in a chain of experiments that eventually landed Novek at Yale University, leading a research project while he was still in high school. Starting after that summer at the beach before his freshman year, Novek ran successive experiments at his school's lab. By the time he was a sophomore, he had arrived at a simple and economical way to capture carbon dioxide from the air and convert it into useful products. He recently became a semifinalist in the $20 million NRG COSIA Carbon XPRIZE.

In conversation, Novek is almost breathless as he describes his research. There's a lot to cover. His "weird route," as he calls it, from that seaside brainstorm to carbon capture began with filing a patent for a tidal energy system, which then led him to research the energy potential of salinity gradients between freshwater and ocean water, which led him to explore parallel reactions between ammonia and carbon dioxide. Fully appreciating his rapid-fire account of the multiple threads of inquiry that led to his carbon capture idea might require an advanced education in chemistry.

But Novek is certainly not alone in his fascination with carbon capture, which offers a tantalizing but elusive prospect: If we can siphon planet-warming carbon dioxide emissions away from the smokestack before they hit the atmosphere, then maybe the imperative to phase out fossil fuels becomes a bit less dire.

Despite the world's wealth of carbon dioxide, extracting it on a commercial scale—where it can be stored underground or recycled into other products—has been a tough nut to crack. Billions of dollars have disappeared into high-profile "clean coal" projects in Canada and Mississippi (work on the latter was just suspended) without producing the hoped-for results.

"I've always been inspired by the concept of turning waste products into valuable materials," says Novek. "I love the concept of being able to bring everyone in the world to a higher standard of living without running out of the resources we have on Earth."

To him, carbon capture has less to do with extending the use of coal for electricity and more to do with industrial pollution. He points out that even if we could manage to get all our electricity from renewable sources such as wind and solar, the world would still be left with carbon dioxide emissions from steel and cement factories, for example, which aren't nearly as far along as the power sector in converting away from fossil fuels.

"How do you deal with that aspect?" he asks of those industrial sites. "That’s where CO2 capture comes into play."

As Novek read papers on salinity gradients for his tidal energy idea, he saw a recurring name: Menachem Elimelech. So he wrote to the Yale professor, expressing interest in his research and peppering him with questions via email. He got no reply.

In the meantime, Novek kept pursuing his experiments, which eventually led him to carbon dioxide. After winning awards including first place at Intel's International Science and Engineering Fair for his mechanism to turn carbon dioxide and ammonia into urea, a main component of fertilizer, he wrote again to Elimelech with an update.

This time, he got a reply. Congratulating Novek on his persistence, Elimelech wanted to know whether the teen would present his research at Yale and publish a peer-reviewed journal article on the topic.

Novek spent his last two years of high school on Yale's campus, shifting from the urea idea to his carbon capture and reuse concept, which focuses on organic solvents that release carbon dioxide from flue gas at room temperature, using 96 percent less energy than existing carbon capture processes. He is also working on an idea to use waste hydrogen sulfide from oil and gas operations to convert carbon dioxide into carbon monoxide, which is used to produce chemicals and plastics.

Working with Elimelech and graduate students, he published a peer-reviewed paper and entered the XPRIZE competition while earning his high school degree remotely.

"I have had many, many bright students come and go," says Andrew Bramante, who led Novek's Independent Science Research class at Greenwich High School in Connecticut. "Ethan, however, is a breed apart."

Bramante guesses Novek "likely spent his class time dreaming of his inventions, rather than completing the requisite class work." Novek confirms as much, saying that before he moved to New Haven, he was having trouble balancing the time commitments of high school with his Yale research.

But Novek says the early freedom to experiment in his high school lab and at home is what helped him succeed. His parents aren't science-minded, he says, so there was no one to tell him not to bother with an experiment that might be bound to fail but would give him valuable insight.

"I actually experienced grit, in a sense—to really be able to realize why something doesn't work," he says, "instead of someone telling me why something doesn't work." He also relied heavily on Web research, looking up papers in journals: "If the Internet didn't exist, I don't know where I would be right now."

Now 18, Novek has founded a company, Innovator Energy, to pursue his carbon capture and conversion techniques and has relocated to San Antonio, Texas, where he is building a prototype of his system with a team at the Southwest Research Institute. He says his technology can capture carbon dioxide at $8 per ton, well below the current market price (around $13 per ton in California, for example).

Though recently admitted to Yale, where he plans to study chemical engineering, Novek is taking a gap year to focus on his carbon research. With that year, he says, he aims to "create the biggest impact in the shortest period of time."

What makes people dislike wind? It's not NIMBY

Christa Marshall, E&E News reporter, Greenwire: Thursday, June 29, 2017

The "not in my backyard" barrier to wind development has been overstated, and other factors play much more of a role in whether communities support the resource, according to a report from Lawrence Berkeley National Laboratory.

The report surveys three decades of research on the factors driving pockets of opposition to wind farms and influencing community "acceptance." In general, 70 to 90 percent of Americans have favorable views of wind power, but local conflicts can develop, researchers said.

"The rapid growth of wind energy in recent years has increased its footprint, bringing the issue of community acceptance to the forefront. Cooperation between wind development actors and those in the host communities is critical to successful deployment processes, and therefore understanding local attitudes and factors driving acceptance and opposition is an essential first step," the lab said in a release.

Despite frequent citations of NIMBY fights, the research doesn't support the idea that the concept explains much, the report said.

Instead, socioeconomic factors such as how wind alters job development in a given area, or how landowners are compensated for wind farms, play a significant role in whether opposition or support develops.

Compensation to landowners "may be correlated to acceptance, but it also can create community conflict, exacerbate inequality, and be seen as bribery," the report stated.

Also, the sound and visual impacts of wind turbines can be strongly tied to "annoyance and opposition," according to the report. Ignoring those concerns can exacerbate conflict, it said.

"Negative attitudes stemming from the visual impacts of wind turbines may not occur simply because people dislike how turbines look; people also have become accustomed to an electricity system that is essentially 'invisible' to consumers owing to centralized infrastructure typically sited far from population centers," the report said.

Other studies have correlated opposition with concerns about the sound of turbines and worries about health risks. Research has consistently shown that there are no health risks or impacts on sleep quality, but the perception that there are can still create pushback, according to the report. Making the public more aware of what the effects actually are could alter perceptions, it said.

Environmental factors matter, too, although perhaps less than other ones. "The direction of the correlation remains unclear," the lab said.

Environmental factors can cut both ways, according to various studies. Concerns about climate change, for instance, can increase support, while the perception that birds could be harmed — even if untrue — can generate backlash.

Overall, there needs to be a lot more research on what drives community acceptance, since there are contradictory findings, the report said. Research over the past 30 years has also been slow to transition into practice.

It's still unclear, for instance, whether placing turbines a certain distance from individuals matters, and if so, exactly what that distance should be.

What is known is that an area's demographics provide little insight into likely acceptance of wind farms.

"Demographic variables such as gender, income, and education level do little to explain variation in wind energy attitudes. Some studies have shown contradictory evidence," the report said.

Ensuring that a community feels like the wind planning process was fair, and that residents were given a say — or were at least made aware of all developments — can also make a big difference.

"In some wind development models, local citizens have been entirely removed from the planning and design of wind developments. This may lead to feelings of injustice among local residents, who perceive that government and corporate decision-making ... takes place in faraway boardrooms," the report said.

----------------------------------------------------------------

Thirty years of North American wind energy acceptance research: What have we learned?

Authors: Joseph Rand; Ben Hoen

Date Published: 06/2017

Abstract:

Thirty years of North American research on public acceptance of wind energy has produced important insights, yet knowledge gaps remain. This review synthesizes the literature, revealing the following lessons learned. (1) North American support for wind has been consistently high. (2) The NIMBY explanation for resistance to wind development is invalid. (3) Socioeconomic impacts of wind development are strongly tied to acceptance. (4) Sound and visual impacts of wind facilities are strongly tied to annoyance and opposition, and ignoring these concerns can exacerbate conflict. (5) Environmental concerns matter, though less than other factors, and these concerns can both help and hinder wind development. (6) Issues of fairness, participation, and trust during the development process influence acceptance. (7) Distance from turbines affects other explanatory variables, but alone its influence is unclear. (8) Viewing opposition as something to be overcome prevents meaningful understandings and implementation of best practices. (9) Implementation of research findings into practice has been limited. The paper also identifies areas for future research on wind acceptance. With continued research efforts and a commitment toward implementing research findings into developer and policymaker practice, conflict and perceived injustices around proposed and existing wind energy facilities might be significantly lessened.

Could a Trade Dispute with China Bring an End to U.S. Solar Boom?

Low-cost solar cells produced in China have helped power the recent surge in the U.S. solar industry. But a case now before the federal International Trade Commission could lead to tariffs that would jeopardize U.S. solar’s rapid growth.

By Marc Gunther • June 27, 2017

Cheap Chinese solar cells have powered a boom in the U.S. solar industry. They have helped drive down the cost of making electricity from sunlight by about 70 percent since 2010, leading to double-digit growth rates in rooftop and utility-scale installations, according to the industry. Last year, for the first time, solar added more generating capacity to the electricity grid than any other fuel, including natural gas. That’s welcome news to those who worry about climate change.

Now, though, the solar boom may be in jeopardy. The U.S. International Trade Commission, an independent federal agency, has begun an investigation that could lead to sweeping trade protections against the imports that would raise the costs of solar power and could bring a halt to solar’s rapid U.S. growth.

If the trade commission finds that imports caused serious harm to U.S. solar manufacturers, it will recommend trade remedies, which could potentially include tariffs on all imported solar products. President Donald Trump, a champion of U.S. manufacturing, would get the final word on any action — a prospect that has the solar industry in a tizzy.

The prospect of global tariffs “poses an existential threat to the broad solar industry and its 260,000 American jobs,” says Abigail Ross Hopper, the chief executive of the Solar Energy Industries Association, the industry’s largest trade organization. Most solar jobs in the United States are in sales and installation, not manufacturing, but tariffs could drive up the cost of solar and make it less competitive.

The trade investigation began in response to a petition filed by Suniva, a bankrupt manufacturer of solar cells and panels based in suburban Atlanta, with factories in Georgia and Michigan. Suniva, the second largest U.S. solar panel maker by volume, has been joined in the case by SolarWorld Americas, the largest U.S. solar panel manufacturer, which has a factory in Oregon.

U.S. solar manufacturers “simply cannot survive” in a market where foreign imports “have unexpectedly exploded and prices have collapsed,” Suniva said in its petition. SolarWorld Americas said it decided to join with Suniva because “massive overproduction” of Chinese solar cells and panels has “led to the near-destruction of remaining solar producers in America.”

The domestic solar firms have asked the Trump administration to impose steep tariffs on all imported solar cells, which are the devices inside solar panels that convert sunlight into electricity, and to set a floor price on solar panels containing imports. Those measures would roughly double the cost of imported panels, analysts say.
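The "roughly double" estimate follows directly from the shape of the requested remedies: a per-watt duty on cells plus a minimum price on modules. The numbers below are assumptions chosen only to show the arithmetic; they are not the figures in the petition.

```python
# Illustrative arithmetic only; both prices are assumptions, not the petition's figures.
import_price_per_watt = 0.35   # assumed pre-remedy price of an imported module, $/W
floor_price_per_watt = 0.74    # assumed minimum import price under the remedy, $/W

increase = floor_price_per_watt / import_price_per_watt
print(f"imported module cost rises roughly {increase:.1f}x")   # about 2x
```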


Some industry analysts say higher costs for solar will slow the industry’s growth. According to a report from analyst IHS Markit, demand for U.S. solar photovoltaics could be reduced by 60 percent over the next three years if the trade commission grants Suniva’s petition.

Hugh Bromley, an industry analyst with Bloomberg New Energy Finance, said in a note to clients that Suniva’s accusations are “riddled with holes and hypocrisies.” Still, he adds, “Those may not matter if the case makes its way to President Trump’s desk.”

As a candidate and as president, Trump has vowed to enforce U.S. trade laws as a way to strengthen the nation’s manufacturing base. Slapping tariffs on imported solar panels would benefit not only domestic solar manufacturers but traditional energy producers, including the coal industry, that compete with solar.

The International Trade Commission (ITC) has already made one statement about the Suniva petition — that, within the meaning of trade law, it is “extraordinarily complicated.” About that, no one disagrees. For starters, Suniva is majority-owned by Shunfeng International Clean Energy, a Chinese company that opposes Suniva’s petition, and SolarWorld Americas is a subsidiary of an insolvent German firm. Yet both are taking a stance against imports into the U.S.

How can that be? In Suniva’s case, the petition is being driven by SQN Capital Management, a New York-based asset manager that made $51 million in loans to Suniva and spent another $4 million on legal fees. In a letter to the China Chamber of Commerce for Import & Export of Machinery & Electronic Products, SQN offered to drop the petition if a buyer could be found for Suniva’s manufacturing equipment, which SQN says is worth $55 million. The Chinese declined to make a deal, and the issue became moot when SolarWorld Americas entered the case and the ITC decided to investigate.

This isn’t the first time that U.S. solar manufacturers have sought trade sanctions. In 2012, the Obama administration imposed modest tariffs on Chinese imports after finding that the Chinese government provided illegal export subsidies to its manufacturers. Two years later, it extended the tariffs to Taiwan. Those moves were prompted by cases brought by SolarWorld Americas.

Nevertheless, solar imports to the U.S. continued to surge — from $5.1 billion in 2012 to $8.3 billion in 2016, according to Suniva — as Chinese companies built factories in Thailand, Vietnam, and Malaysia, which were unaffected by the tariffs. Last year, the U.S. imported $520 million in panels from Thailand, up from almost nothing in 2012, and another $514 million from Vietnam, up from less than $1 million in 2012, according to Suniva.


That’s why Suniva now wants tariffs imposed globally. “Without global relief, the domestic industry will be playing ‘whack-a-mole’ against [solar cells] and modules from particular countries,” says Matthew McConkey, a lawyer for Suniva, in the petition to the ITC.

Trade experts agree that China has subsidized its giant solar manufacturers. Beijing and provincial governments provided free or low-cost loans; artificially cheap raw materials, components, and land; support for research and development; and a demand that was artificially driven by domestic regulation, according to Usha C.V. Haley and George Haley, wife-and-husband authors of a 2013 book, “Subsidies to Chinese Industry: State Capitalism, Business Strategy and Trade Policy.”

“Production in China is still heavily subsidized,” says Usha Haley, a professor of management at West Virginia University. “There is little doubt in my mind that it is going to become a monopoly producer — and then, of course, they will raise prices.”

What’s more, a solar industry dominated by a handful of Chinese companies will have little incentive to innovate, argues Stephen Ezell, a vice president at the Information Technology & Innovation Foundation, a Washington think tank. The U.S. industry, which invented solar photovoltaics and still leads the world in solar patents, simply will not have the resources it needs to invest in research.

“These industries are fundamentally about generating the next-generation product,” Ezell says. “We’re getting locked into a lower level of technological development.” In the long run, that would make it more difficult for the global solar industry to dislodge its fossil-fuel competitors.

Still, U.S. firms that complain about subsidies run the risk of being called hypocrites. Suniva, for instance, enjoyed state tax incentives for its Michigan plant, and “many other U.S. solar manufacturers have received tax, grant, and loan guarantee incentives,” says analyst Hugh Bromley. SolarCity, a unit of Tesla, is building a $900 million factory in Buffalo, New York, to make solar panels, with major subsidies.


Other solar manufacturers headquartered in the U.S., including SunPower and First Solar, have located a majority of their manufacturing offshore and so would be affected by any tariffs. In fact, SunPower has joined with the Solar Energy Industries Association to oppose the tariffs. SunPower, First Solar, and SolarCity all declined to comment on the trade issue.

Some analysts believe that the industry will be able to adjust to the tariffs. Setting a floor price for panels will lead developers to choose higher-efficiency, higher-cost panels that will enable other price reductions along the supply chain, says Roberto Rodriguez Labastida, an analyst with Navigant Research. “There will be some shake-ups and adjustments,” he says, but nothing like the meltdown being forecast by some.

The ITC will decide in September whether U.S. manufacturers have been injured. Shara Aranoff, a former chair of the commission who is now a corporate lawyer, says the nonpartisan ITC commissioners will be guided by the law and the facts. “It is one of the most independent agencies in the federal government,” she said.

If the commission finds harm, the issue moves into the political arena. Already, Daniel Kildee, a Democratic congressman from Michigan, and Rob Woodall, a Republican congressman from Georgia, have called for trade remedies. The solar industry is arguing that tariffs will kill many more jobs than they will save, and that there are better ways to protect U.S. manufacturing.

In making any decision, the Trump administration would be free to take anything into account — jobs, the impact of higher solar prices on consumers, and, at least in theory, the environment. Few would expect the environment to be high on the administration’s priority list here. But what will the president do? As with so many issues in Washington these days, that’s anybody’s guess.

Marc Gunther has reported on business and sustainability for Fortune, The Guardian, and GreenBiz. He now writes about foundations, nonprofits, and global development on his blog, Nonprofit Chronicles.

Southern Co.'s clean coal plant hits a dead end

Kristi E. Swartz, E&E News reporter

Energywire: Thursday, June 22, 2017

Southern Co.'s $7.5 billion clean coal plant in Mississippi should run as a natural gas plant, state regulators said yesterday, throwing a gut punch to the utility's hopes of recovering billions of dollars in cost overruns and paving the way for next-generation coal plants.

The Kemper County Energy Facility would be the first large coal-burning power plant in the United States to capture and store the majority of its carbon dioxide emissions. The power plant is supposed to gasify lignite coal into synthetic gas, capturing the CO2 for use in enhanced oil recovery.

The plant has been running on natural gas for years, and each of Kemper's two gasifiers has successfully produced electricity from synthetic gas. But Mississippi Power, the Southern Co. unit building the plant, has struggled to keep the plant's complex systems running nonstop. That has delayed its full startup.

The surprisingly aggressive move by the three-member panel of Mississippi regulators is a significant financial and political setback for Southern, which has promoted the idea of building a first-of-its-kind clean coal plant at scale.

Mississippi Power, for its part, could be stuck with a massive bill the state won't allow it to pass on to electricity customers. Instead, the state Public Service Commission wants the utility to plan on running on gas.

Kemper originally was supposed to be operating in 2014, and its original price tag was a little under $3 billion. Now at $7.5 billion and a projected startup date of the end of the month, Kemper is at a crossroads. Mississippi Power filed a new rate plan for Kemper at the beginning of the month, as required by the PSC, but the utility had to hold off on formally asking to collect $4.3 billion from customers because the plant is not yet operating.

It was that rate filing that triggered the commission's response, which came after a special meeting and a closed executive session. The PSC directed the utility to work with the staff attorney and other stakeholders to come up with a settlement within 45 days.

The settlement should allow Kemper to operate on natural gas only and not raise customer rates, the commission said. What's more, Mississippi Power should find a way to lower rates, especially for its residential class of customers.

"No rate increases, period," said PSC Chairman Brandon Presley. "And, none of the gasifier assets will ever be included in rate base."

The PSC is scheduled to review an order addressing its directives at a July meeting.

"It's obvious to anybody that this project has had its challenges," Presley said. "Because they made the rate filing, we felt it was time for the commission to give guidance to what we felt the public interest is."

'Other options'

If the utility and PSC staff cannot negotiate a settlement within 45 days, "we'll resort to other options," Presley said.

"We look forward to reviewing the order," Mississippi Power replied after the PSC meeting.

The ability to run nonstop as a clean coal plant is key for the utility to recoup costs from customers under its current regulatory certificate. Meanwhile, the project's operation date has been pushed back several times, including a series of one-month delays that started at the end of last year.

"I applaud the Mississippi Public Service Commissioners with their decision today to pull the plug on big bets gone bad, which has resulted in the most expensive power plant built in the United States that still does not run as advertised," said Louie Miller, state director for the Sierra Club's Mississippi chapter.

Kemper has operated for about 200 days using lignite, according to Mississippi Power. But the company said it may need to redesign a critical part for its next-generation coal power plant, and that may take 18 months to two years to complete.

"If the project isn't working at this point in time, it's not surprising that the commission is concerned about ratepayers paying for it," said Paul Patterson, a utility analyst with Glenrock Associates LLC.

Patterson said he doesn't think it means the utility cannot recover all of the costs in question. But it has to work out those details with the PSC staff and other stakeholders.

"The real question can be whether they will reach a settlement or whether there will be some litigation," he said.

Not being able to recover the bulk of the outstanding costs could cripple Mississippi Power financially. Regulators agreed to $159 million in emergency rate relief in August 2015 after the utility's CEO testified that the company was on the brink of bankruptcy and would run out of cash by the end of the year.

Kemper was running as a natural gas plant at that time. Costs continued to rise, and an earlier settlement with regulators capped what the utility eventually could ask to recover from its customers.

That agreement, in part, has led to Southern's shareholders absorbing roughly $3 billion from Kemper at this point. Mississippi Power's current liabilities exceeded current assets by about $1.2 billion as of the end of March, according to a recent Securities and Exchange Commission filing.

"Southern intends to provide Mississippi Power with loans and/or equity contributions sufficient to fund the remaining indebtedness scheduled to mature and other cash needs over the next 12 months" if sufficient funds are not available from external sources, the filing said.

A campaign to eliminate plastic straws is sucking in thousands of converts

Discarded plastic straws pose an environmental hazard. There is a growing movement to eliminate them altogether. (Helen Lockhart/Two Oceans Aquarium)

By Darryl Fears June 24

It started so innocently. A kid ordered a soda in a restaurant.

“It came with a plastic straw in it,” Milo Cress recalled. He glared at the straw for a while. “It seemed like such a waste.”

Not only did Cress yank the plastic from his drink, but he also launched a campaign, “Be Straw Free,” targeting all straws as needless pollution. He knocked on the doors of restaurants in Burlington, Vt., where he lived at the time, and asked managers not to offer straws unless patrons asked. He was 9 years old.

Today Cress, 15, is one of the faces of a growing movement to eliminate plastic straws. They have been found wedged in the nose of a sea turtle, littering the stomachs of countless dead marine animals and scattered across beaches with tons of other plastics.

Why single out pollution as small and slim as a drinking straw?

Straws are among the most common plastic items volunteers clean from beaches, along with bottles, bags and cups, conservationists say. Americans use half a billion straws every day, at least according to an estimate by Be Straw Free, based on information from straw manufacturers. That many straws could wrap around the Earth 2½ times.
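That "2½ times" figure is easy to sanity-check. The check below assumes an average straw length of about 20 centimeters and uses the Earth's equatorial circumference of roughly 40,075 kilometers; both numbers come from outside the article.

```python
# Sanity check on the "wrap around the Earth 2.5 times" claim.
straws_per_day = 500_000_000
straw_length_m = 0.20               # assumed average straw length, about 20 cm
earth_circumference_km = 40_075     # equatorial circumference of the Earth

total_km = straws_per_day * straw_length_m / 1000
print(f"{total_km:,.0f} km, about {total_km / earth_circumference_km:.1f} times around the Earth")
```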

The slightest wind lifts plastic straws from dinner tables, picnic blankets and trash dumps, depositing them far and wide, including in rivers and oceans, where animals often mistake them for food.

And they are ubiquitous. Nearly every chain restaurant and coffee shop offers straws. They’re in just about every movie theater and sit-down restaurant. Theme parks and corner stores and ice cream shops and school cafeterias freely hand them out.

But they are starting to disappear because of the awareness campaign Cress and dozens of conservation groups are waging. Walt Disney World’s Animal Kingdom bans them, as do the food concession areas of Smithsonian Institution museums.

Keith Christman, a managing director for plastics markets at the American Chemistry Council, which promotes plastics manufacturers and fights attempts to ban plastic, said in a National Geographic article two months ago that the group would do the same for attempts to eliminate plastic straws.

But a spokeswoman for the council said “we won’t be able to offer comment” or say whether the group backs Christman’s claim.

The movement was growing at a slow, steady pace when Cress joined it six years ago, but it exploded after a YouTube video of a sea turtle with a straw stuck in its nose went viral in 2015. The cringe-inducing effort to pull the plastic out of a bloody nostril outraged viewers — 11.8 million so far.

Cress has launched a website on the issue, partnered with several organizations that support the cause and testified against straws in the Vermont legislature. Colorado Gov. John Hickenlooper (D) cited Cress’s activism in a 2013 proclamation that made July 11 a straw-free day in the state.

Manhattan Beach outside Los Angeles banned all disposable plastics, including straws. Berkeley, Calif., is considering a ban. Restaurants in San Diego; Huntington Beach, Calif.; Asbury Park, N.J.; New York; Miami; Bradenton, Fla.; London; and British Columbia have pledged to ban straws or withhold them until patrons ask for them.

The Plastic Pollution Coalition estimates that 1,800 “restaurants, organizations, institutions and schools worldwide have gotten rid of plastic straws or implemented a serve-straws-upon-request policy,” said Jackie Nunez, founder of a group called the Last Plastic Straw.

More than 20 such restaurants near Wrightsville Beach, N.C., signed up last year to be certified by a coalition of groups as establishments that won’t serve straws unless they’re requested.

Ginger Taylor, a volunteer who cleans trash from the five-mile beach, said the campaign is working, at least anecdotally.

“I’ve been picking up straws on Monday morning on that same stretch of beach for five years,” she said. Four years ago, she picked up 248 straws in about two weeks. The next two years, she collected about 500. But the number fell to 158 after the awareness campaign started last year.

Diana Lofflin, founder of a group called Straw Free, said the turtle video inspired her year-old organization. Her volunteers persuaded California’s Joshua Tree Music Festival to go straw-free in May. They also knock on the doors of Orange County, Calif., homeowners who grow bamboo to ask whether they can harvest a little and make reusable straws from the plant. Like several other groups, Straw Free sells reusable bamboo straws online, theirs in packs of 10 for $1.50.

Xanterra Parks & Resorts, a concessions company that partners with the National Park Service to provide food and lodging at Rocky Mountain National Park, the Grand Canyon and other national parks, offers straws at dispensers but posts fliers asking patrons not to use them.

“Humans didn’t really evolve around straws. It’s not like we have to consume fluids with this appendage. What really, what is this?” said Catherine Greener, vice president of sustainability for the company.

The prevailing notion says the modern drinking straw was invented in the late 19th century by Marvin Stone, a D.C. man who didn’t like how the traditional ryegrass straws people used for drinking would disintegrate and leave gritty residue in his mint juleps. Stone wrapped strips of paper around a pencil, glued the strips together and test-marketed the contraption, and in 1888, the disposable straw was born, according to the Smithsonian’s Lemelson Center for the Study of Invention and Innovation.

The new paper straw was limited mostly to use in hospitals, which used the innovation to avoid spreading disease. Usage widened during the polio epidemics of the early 1900s, as people avoided putting their mouths on others’ drinking glasses. Finally, in the 1960s, restaurants offered a new invention: the disposable plastic straw.

It’s a convenience people seem to use arbitrarily. Millions drink soda with a straw, but hardly any suck beer through one. Hot-coffee drinkers gulp directly from cups but stick straws in iced coffee. Bar hoppers drink highballs from a glass, but mixed cocktails come with a straw.

“There are plenty of times when straws just aren’t necessary,” said Aaron Pastor, a restaurant consultant and one of dozens of vendors who sell stainless steel, bamboo and other reusable straws online.

“I’ve sold thousands of [reusable] straws,” Pastor said, but it’s not a booming business. “This isn’t paying my mortgage.”

Pastor said chastising plastic straw users isn’t his style. “If your goal isn’t to preach and come across as ‘I’m better than you,’ that’s best. I just say they’re wasteful, they end up in oceans and, hey, do you really need one.”

At the Smithsonian’s National Museum of Natural History in Washington, Melissa and Brian Charon said no. The parents, visiting from Boston with small children who grabbed at food spread on a table, shrugged when told the museum doesn’t offer straws.

“It’s fine with me. I don’t really miss it,” Melissa said. Then Brian added: “I look in the drawer in our kitchen every day and say, ‘Why do we have so many straws?’ ”

At Xanterra’s national parks concessions, “We want people to think about this throwaway society, especially in these beautiful places,” Greener said. “They can take to the air. It’s easy for them to get blown around.”

The anti-straw message is also getting blown around. Greener was looking for composting tips a few years ago when she came across a profile of Cress, who had partnered with a recycling center in Colorado called Eco-Cycle.

Greener wanted to talk to the kid, who by then was powering the anti-straw movement in Colorado, where his family had moved. Based on their conversation, Greener decided to promote straw awareness at Xanterra’s concessions.

“He’s obviously a gifted teen. He’s probably running for Congress. He was very inspirational and innovative,” Greener said. “All I wanted to do at his age was get my driver’s license. I look at kids like that, and it makes me very hopeful.”

Cress isn’t running for office — yet. He said he’s enjoying a six-year passion that has taken him to Australia, Portugal, Germany, France, Boston, Washington and many high schools all over to deliver speeches.

“My favorite part about it has been getting to talk to other kids and listening to their ideas,” Cress said. “It’s really cool, and I think it’s really empowering. I certainly feel like I’m listened to and valued in a larger community, and I really appreciate that.”

California to list herbicide as cancer-causing; Monsanto vows fight

By Karl Plume

Glyphosate, an herbicide and the active ingredient in Monsanto Co's popular Roundup weed killer, will be added to California's list of chemicals known to cause cancer effective July 7, the state's Office of Environmental Health Hazard Assessment (OEHHA) said on Monday.

Monsanto vowed to continue its legal fight against the designation, required under a state law known as Proposition 65, and called the decision "unwarranted on the basis of science and the law."

The listing is the latest legal setback for the seeds and chemicals company, which has faced increasing litigation over glyphosate since the World Health Organization's International Agency for Research on Cancer said that it is "probably carcinogenic" in a controversial ruling in 2015.

Earlier this month, Reuters reported that the scientist leading the IARC’s review knew of fresh data showing no link between glyphosate and cancer. But he never mentioned it, and the agency did not take the information into account because it had yet to be published in a scientific journal. The IARC classed glyphosate as a “probable carcinogen,” the only major health regulator to do so.

Dicamba, a weed killer designed for use with Monsanto's next generation of biotech crops, is under scrutiny in Arkansas after the state's plant board voted last week to ban the chemical.

OEHHA said the designation of glyphosate under Proposition 65 will proceed following an unsuccessful attempt by Monsanto to block the listing in trial court and after requests for a stay were denied by a state appellate court and the California Supreme Court.

Monsanto's appeal of the trial court's ruling is pending.

"This is not the final step in the process, and it has no bearing on the merits of the case. We will continue to aggressively challenge this improper decision," Scott Partridge, Monsanto's vice president of global strategy, said.

Listing glyphosate as a known carcinogen under California's Proposition 65 would require companies selling the chemical in the state to add warning labels to packaging. Warnings would also be required if glyphosate is being sprayed at levels deemed unsafe by regulators.

Users of the chemical include landscapers, golf courses, orchards, vineyards and farms.

Monsanto and other glyphosate producers would have roughly a year from the listing date to re-label products or remove them from store shelves if further legal challenges are lost.

Monsanto has not calculated the cost of any re-labeling effort and does not break out glyphosate sales data by state, Partridge said.

Environmental groups cheered OEHHA's move to list the chemical.

"California's decision makes it the national leader in protecting people from cancer-causing pesticides," said Nathan Donley, a senior scientist at the Center for Biological Diversity.

Ozone hole recovery threatened by rise of paint stripper chemical

The restoration of the ozone layer, which blocks harmful radiation, will be delayed by decades if fast-rising emissions of dichloromethane are not curbed

Dichloromethane, found in paint-stripping chemicals, has a relatively short lifespan, so action to cut its emissions would have rapid benefits.

Tuesday 27 June 2017 11.00 EDT, Last modified on Tuesday 27 June 2017 20.05 EDT

The restoration of the globe’s protective shield of ozone will be delayed by decades if fast-rising emissions of a chemical used in paint stripper are not curbed, new research has revealed.

Atmospheric levels of the chemical have doubled in the last decade and its use is not restricted by the Montreal protocol that successfully outlawed the CFCs mainly responsible for the ozone hole. The ozone-destroying chemical is called dichloromethane and is also used as an industrial solvent, an aerosol spray propellant and a blowing agent for polyurethane foams. Little is known about where it is leaking from or why emissions have risen so rapidly.

The loss of ozone was discovered in the 1980s and is greatest over Antarctica. But Ryan Hossaini, at Lancaster University in the UK and who led the new work, said: “It is important to remember that ozone depletion is a global phenomenon, and that while the peak depletion occurred over a decade ago, it is a persistent environmental problem and the track to recovery is expected to be a long and bumpy one.”

“Ozone shields us from harmful levels of UV radiation that would otherwise be detrimental to human, animal and plant health,” he said.

The new research, published in the journal Nature Communications, analysed the level of dichloromethane in the atmosphere and found it rose by 8% a year between 2004 and 2014. The scientists then used sophisticated computer models to find that, if this continues, the recovery of the ozone layer would be delayed by 30 years, until about 2090.
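An 8 percent annual rise compounds fast, which is why it squares with the doubling over the last decade mentioned above. A quick check of the compounding, which is plain arithmetic rather than the paper's atmospheric modeling:

```python
# Compound-growth check: 8 percent a year roughly doubles in a decade.
growth = 1.08
print(f"after 10 years: x{growth**10:.2f}")   # ~2.16, consistent with "doubled in the last decade"
print(f"after 30 years: x{growth**30:.1f}")   # ~10x if that growth rate were somehow sustained
```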

The chemical was not included in the 1987 Montreal protocol because it breaks down relatively quickly in the atmosphere, usually within six months, and had not therefore been expected to build up. In contrast, CFCs persist for decades or even centuries.

But the short lifespan of dichloromethane does mean that action to cut its emissions would have rapid benefits. “If policies were put in place to limit its production, then this gas could be flushed out of the atmosphere relatively quickly,” said Hossaini.

If the dichloromethane in the atmosphere were held at today’s level, the recovery of the ozone layer would only be delayed by five years, the scientists found. There was a surge in emissions in the period 2012-14, and if growth continued at that very high rate, the ozone recovery would be postponed indefinitely, but Hossaini said this extreme scenario is unlikely: “Our results still show the ozone hole will recover.”

Grant Allen, an atmospheric physicist at the University of Manchester, said: “Whatever the source of this gas, we must act now to stop its release to the atmosphere in order to prevent undoing over 30 years of exemplary science and policy work which has undoubtedly saved many lives.”


Jonathan Shanklin, one of the scientists at the British Antarctic Survey (BAS) who discovered the ozone hole in 1985, said: “The Montreal protocol has proved very effective at reducing the emissions of substances that can harm the ozone layer. I am sure that the warning made in this paper will be heeded and that dichloromethane will be brought within the protocol in order to prevent further damage to the ozone layer.”

There are other short-lived gases containing the chlorine that destroys ozone, but few measurements have been taken of their levels in the atmosphere. “Unfortunately there is no long-term record of these, only sporadic data, but these do indicate they are a potentially significant source of chlorine in the atmosphere,” said Hossaini, adding that further research on this was needed.

Anna Jones, a scientist at BAS, said: “The new results underline the critical importance of long-term observations of ozone-depleting gases and expanding the Montreal protocol to mitigate new threats to the ozone layer.”

Overall the Montreal protocol is seen as very successful in cutting ozone losses, with estimates indicating that without the protocol the Antarctic ozone hole would have been 40% larger by 2013. Scientists discovered four “rogue” CFCs in 2014 that were increasing in concentration in the atmosphere and contributing to ozone-destruction.

Researchers Found They Could Hack Entire Wind Farms

Over two years, University of Tulsa researchers performed penetration tests on five different wind farms. (Not this one.) Photo: Ross Mantle for WIRED

On a sunny day last summer, in the middle of a vast cornfield somewhere in the large, windy middle of America, two researchers from the University of Tulsa stepped into an oven-hot, elevator-sized chamber within the base of a 300-foot-tall wind turbine. They’d picked the simple pin-and-tumbler lock on the turbine’s metal door in less than a minute and opened the unsecured server closet inside.

Jason Staggs, a tall 28-year-old Oklahoman, quickly unplugged a network cable and inserted it into a Raspberry Pi minicomputer, the size of a deck of cards, that had been fitted with a Wi-Fi antenna. He switched on the Pi and attached another Ethernet cable from the minicomputer into an open port on a programmable automation controller, a microwave-sized computer that controlled the turbine. The two men then closed the door behind them and walked back to the white van they’d driven down a gravel path that ran through the field.

Staggs sat in the front seat and opened a MacBook Pro while the researchers looked up at the towering machine. Like the dozens of other turbines in the field, its white blades—each longer than a wing of a Boeing 747—turned hypnotically. Staggs typed into his laptop's command line and soon saw a list of IP addresses representing every networked turbine in the field. A few minutes later he typed another command, and the hackers watched as the single turbine above them emitted a muted screech like the brakes of an aging 18-wheel truck, slowed, and came to a stop.

For the past two years, Staggs and his fellow researchers at the University of Tulsa have been systematically hacking wind farms around the United States to demonstrate the little-known digital vulnerabilities of an increasingly popular form of American energy production. With the permission of wind energy companies, they’ve performed penetration tests on five different wind farms across the central US and West Coast that use the hardware of five wind power equipment manufacturers.

As part of the agreement that legally allowed them to access those facilities, the researchers say they can't name the wind farms’ owners, the locations they tested, or the companies that built the turbines and other hardware they attacked. But in interviews with WIRED and a presentation they plan to give at the Black Hat security conference next month, they're detailing the security vulnerabilities they uncovered. By physically accessing the internals of the turbines themselves—which often stood virtually unprotected in the middle of open fields—and planting $45 in commodity computing equipment, the researchers carried out an extended menu of attacks on not only the individual wind turbine they'd broken into but all of the others connected to it on the same wind farm's network. The results included paralyzing turbines, suddenly triggering their brakes to potentially damage them, and even relaying false feedback to their operators to prevent the sabotage from being detected.

“When we started poking around, we were shocked. A simple tumbler lock was all that stood between us and the wind farm control network,” says Staggs. “Once you have access to one of the turbines, it’s game over.”

In their attacks, the Tulsa researchers exploited an overarching security issue in the wind farms they infiltrated: While the turbines and control systems had limited or no connections to the internet, they also lacked almost any authentication or segmentation that would prevent a computer within the same network from sending valid commands. Two of the five facilities encrypted the connections from the operators’ computers to the wind turbines, making those communications far harder to spoof. But in every case the researchers could nonetheless send commands to the entire network of turbines by planting their radio-controlled Raspberry Pi in the server closet of just one of the machines in the field.

“They don’t take into consideration that someone can just pick a lock and plug in a Raspberry Pi,” Staggs says. The turbines they broke into were protected only by easily picked standard five-pin locks, or by padlocks that took seconds to remove with a pair of bolt cutters. And while the Tulsa researchers tested connecting to their minicomputers via Wi-Fi from as far as fifty feet away, they note they could have just as easily used another radio protocol, like GSM, to launch attacks from hundreds or thousands of miles away.

The researchers developed three proof-of-concept attacks to demonstrate how hackers could exploit the vulnerable wind farms they infiltrated. One tool they built, called Windshark, simply sent commands to other turbines on the network, disabling them or repeatedly slamming on their brakes to cause wear and damage. Windworm, another piece of malicious software, went further: It used telnet and FTP to spread from one programmable automation controller to another, until it infected all of a wind farm's computers. A third attack tool, called Windpoison, used a trick called ARP cache poisoning, which exploits how control systems locate and identify components on a network. Windpoison spoofed those addresses to insert itself as a man-in-the-middle in the operators’ communications with the turbines. That would allow hackers to falsify the signals being sent back from the turbines, hiding disruptive attacks from the operators’ systems.

While the Tulsa researchers shut off only a single turbine at a time in their tests, they point out that their methods could easily paralyze an entire wind farm, cutting off hundreds of megawatts of power.

Wind farms produce relatively little energy compared with their coal or nuclear equivalents, and grid operators expect them to be less reliable, given their dependence on the real-time ebb and flow of wind currents. That means even taking out a full farm may not dramatically impact the grid overall, says Ben Miller, a researcher at the critical-infrastructure security startup Dragos Inc. and a former engineer at the North American Electric Reliability Council.

More concerning than attacks to stop turbines, Miller says, are those intended to damage them. The equipment is designed for lightness and efficiency, and is often fragile as a result. That, along with the high costs of going even temporarily offline, makes the vulnerabilities potentially devastating for a wind farm owner. "It would all probably be far more impactful to the operator of the wind farm than it would be to the grid," Miller says.

Staggs argues that this potential to cause costly downtime for wind farms leaves their owners open to extortion or other kinds of profit-seeking sabotage. "This is just the tip of the iceberg," he says. "Imagine a ransomware scenario."

While the Tulsa researchers were careful not to name any of the manufacturers of the equipment used in the wind farms they tested, WIRED reached out to three major wind farm suppliers for comment on their findings: GE, Siemens Gamesa, and Vestas. GE and Siemens Gamesa didn't respond. But Vestas spokesperson Anders Riis wrote in an email that "Vestas takes cyber security very seriously and continues to work with customers and grid operators to build products and offerings to improve security levels in response to the shifting cyber security landscape and evolving threats." He added that it offers security measures that include "physical breach and intrusion detection and alert; alarm solutions at turbine, plant, and substation level to notify operators of a physical intrusion; and mitigation and control systems that quarantine and limit any malicious impact to the plant level, preventing further impact to the grid or other wind plants.”

Researchers have demonstrated the vulnerabilities of wind turbines before, albeit on a far smaller scale. In 2015, the US Industrial Control System Computer Emergency Response Team issued a warning about hundreds of wind turbines, known as the XZERES 442SR, whose controls were openly accessible via the internet. But that was a far smaller turbine aimed at residential and small business users, with blades roughly 12 feet in length—not the massive, multimillion-dollar versions the Tulsa researchers tested.

The Tulsa team also didn't attempt to hack its targets over the internet. But Staggs speculates it might be possible to remotely compromise them too—perhaps by infecting the operators' network, or the laptop of one of the technicians who services the turbines. But other hypothetical vulnerabilities pale next to the very real distributed, unprotected nature of turbines themselves, says David Ferlemann, another member of the Tulsa team. "A nuclear power plant is hard to break into," he points out. "Turbines are more distributed. It’s much easier to access one node and compromise the entire fleet."

The researchers suggest that, ultimately, wind farm operators need to build authentication into the internal communications of their control systems—not just isolate them from the internet. And in the meantime, a few stronger locks, fences, and security cameras on the doors of the turbines themselves would make physical attacks far more difficult.

For now, wind farms produce less than 5 percent of America's energy, Staggs says. But as wind power grows as a fraction of US electric generation, he hopes their work can help secure that power source before a large fraction of Americans comes to depend on it.

"If you're an attacker bent on trying to influence whether the lights are on or not," says Staggs, "that becomes a more and more attractive target for you to go after."

Solving a sweet problem for renewable biofuels and chemicals

Arizona State University

Whether or not society shakes its addiction to oil and gasoline will depend on a number of profound environmental, geopolitical and societal factors.

But with current oil prices hovering around $50 a barrel, it won't likely be anytime soon.

Despite several major national research initiatives, no one has been able to come up with the breakthrough renewable biofuel technology that would lead to a cheaper alternative to gasoline.

That research challenge led ASU scientists, Reed Cartwright and Xuan Wang, to enter the fray, teaming up to try to break through the innovation bottleneck for the renewable bioproduction of fuels and chemicals.

"My lab has been very interested in converting biomass such as agricultural wastes and even carbon dioxide into useful and renewable bio-based products," said Wang, an assistant professor in the School of Life Sciences. "As a microbiologist, I'm interested in manipulating microbes as biocatalysts to do a better job."

To do so, they've looked into a new approach -- harnessing the trial-and-error power of evolution to coax nature into revealing the answer.

By growing bacteria over generations under specially controlled conditions in fermentation tanks, they have test-tube evolved bacteria to better ferment sugars derived from biomass -- a rich, potential renewable energy source for the production of biofuels and chemicals.

Their results appeared recently in the online edition of PNAS (doi: 10.1073/pnas.1700345114).

The research team includes postdoctoral scholar Christian Sievert, Lizbeth Nieves, Larry Panyon, Taylor Loeffler and Chandler Morris, and was led by Reed Cartwright and Xuan Wang, in a collaboration between ASU's School of Life Sciences and the Biodesign Institute.

A sweet problem

The appeal of plants is clear. Just add a little carbon dioxide, water and plentiful sunshine, and presto! Society has a rich new source of renewable carbon to use.

Corn ethanol (using starch from corn for alcohol production primarily in the U.S.) has been one major biofuel avenue, and sugarcane another alternative (abundant in Brazil), but there is a big drawback. Turning the sugar-rich kernels of corn or sugarcane into ethanol competes with the food supply.

So scientists over the past few decades have migrated to research on conversion of non-food based plant materials into biofuels and chemicals. These so-called lignocellulosic biomasses, like tall switchgrasses and the inedible parts of corn and sugarcane (stovers, husks, bagasses, etc.), are rich in xylose, a five-carbon, energy-rich sugar relative of glucose.

Lignocellulosic biomass has an abundance of glucose and xylose, but industrial E. coli strains can't make full use of the xylose because, when glucose is available, it switches off xylose utilization. And so, to date, it has been inefficient and costly to fully harvest and convert the xylose to biofuels.

Benchtop evolution

Wang and Cartwright wanted to squeeze out more energy from xylose sugars. To do so, they challenged E. coli bacteria that thrive comfortably on glucose -- switching out the growth medium so the bacteria had to grow solely on xylose.

The bacteria would be forced to adapt to the new food supply or lose the growth competition.

They started with a single colony of genetically identical bacteria and ran three separate evolution experiments with xylose. At first, the bacteria grew very slowly. But remarkably, in no more than 150 generations, the bacteria adapted and eventually learned to thrive in the xylose broth.

Next, they isolated the DNA from the bacteria and used next-generation DNA sequencing technology to examine the changes within the bacterial genomes. When they read out the DNA data, they could identify the telltale signs of evolution in action: mutations.

Nature finds a way

The bacteria, when challenged, randomly mutated their DNA until they could adapt to the new conditions. They held on to the fittest mutations over generations until these became fixed, beneficial mutations.

And in each case, when challenged with xylose, the bacteria could grow well. Their next task was to find out what these beneficial mutations were and how they worked. To grow better on xylose, the three E. coli lines had each "discovered" a different set of mutations to the same genes. The single mutations the research team identified could all enhance xylose fermentation by changing bacterial sugar metabolism.

"This suggests that there are potentially multiple evolutionary solutions for the same problem, and a bacterium's genetic background may predetermine its evolutionary trajectories," said Cartwright, a researcher at ASU's Biodesign Institute and assistant professor in the School of Life Sciences.

The most interesting mutation happened in a regulatory protein called XylR, whose normal function is to control xylose utilization. Just two amino acid switches in XylR could enhance xylose utilization and release the glucose repression, even in the non-mutated original hosts.

Through some clever genetic tricks, when the XylR mutant was placed back in a normal "wild-type" strain or an industrial E. coli biocatalyst, it could also now grow on xylose and glucose, vastly improving the yield. Wang's team saw up to a 50 percent increase in the product after 4 days of fermentation.

Together, Wang and Cartwright's work has significantly boosted the potential of industrial E. coli to be used for biofuel production from lignocellulosic materials. In addition, the same genetic approach could be applied to other E. coli strains to make different products.

Arizona Technology Enterprises (AzTE) is filing a non-provisional patent for their discovery. Wang hopes they can partner with industry to scale up their technology and see if this invention will increase economic viability for bioproduction.

"With these new results, I believe we've solved one big, persistent bottleneck in this field," concluded Wang.

Cellulosic biofuels can benefit the environment if managed correctly

Michigan State University

Could cellulosic biofuels - or liquid energy derived from grasses and wood - become a green fuel of the future, providing an environmentally sustainable way of meeting energy needs? In Science, researchers at the U.S. Department of Energy-funded Great Lakes Bioenergy Research Center say yes, but with a few important caveats.

"The climate benefit of cellulosic biofuels is actually much greater than was originally thought," said Phil Robertson, University Distinguished Professor of Ecosystem Science at Michigan State University and lead author on the study. "But that benefit depends crucially on several different factors, all of which we need to understand to get right."

Although not yet a market force, cellulosic biofuels are routinely factored into future climate mitigation scenarios because of their potential to both displace petroleum use and mitigate greenhouse gas emissions. Those benefits, however, are complicated by the need for vast amounts of land to produce cellulosic biofuels on a large scale.

"The sustainability question is largely about the impact of using millions of acres of U.S. land to grow biofuel crops," Robertson said. "Can we do that without threatening global food security, diminishing biodiversity, or reducing groundwater supplies? How much more fertilizer would we use? What are the tradeoffs for real climate benefit, and are there synergies we can promote?"

Drawing from ten years of empirical research, Robertson and GLBRC colleagues from MSU, the University of Wisconsin and the University of Maryland identify several emerging principles for managing the complex environmental tradeoffs of cellulosic biofuel.

First, the researchers show how growing native perennial species on marginal lands - land not used for food production because of low fertility or other reasons - avoids competition with food security, and provides the greatest potential for climate mitigation and biodiversity benefits.

Second, crop choice is key. Native perennial species offer superior environmental outcomes to annual crops, but no single crop appears to be ideal for all locations. In fact, in some cases mixed species crops provide superior benefits. Third, nitrogen fertilizer use should be avoided or minimized because of its global warming and other environmental impacts.

According to the researchers, these principles (as well as four more outlined in the paper) are enough to begin guiding sound policy decisions for producing sustainable biofuels. Looking forward, however, the team calls for more research on designing landscapes to provide the optimal suite of energy, climate and environmental benefits. They say that understanding how best to integrate benefits and tradeoffs will be key to the future success of cellulosic biofuels.

"With biofuels, the stakes are high," Robertson said. "But the returns are also high, and if we take key principles into account we can begin shaping the policies and practices that could help make cellulosic biofuels a triple win for the economy, the climate and for environmental sustainability in general."

Greenland now a major driver of rising seas: study

By Marlowe HOOD for Paris (AFP) June 26, 2017

Ocean levels rose 50 percent faster in 2014 than in 1993, with meltwater from the Greenland ice sheet now supplying 25 percent of total sea level increase compared with just five percent 20 years earlier, researchers reported Monday.

The findings add to growing concern among scientists that the global watermark is climbing more rapidly than forecast only a few years ago, with potentially devastating consequences.

Hundreds of millions of people around the world live in low-lying deltas that are vulnerable, especially when rising seas are combined with land sinking due to depleted water tables, or a lack of ground-forming silt held back by dams.

Major coastal cities are also threatened, while some small island states are already laying plans for the day their drowning nations will no longer be livable.

"This result is important because the Intergovernmental Panel on Climate Change (IPCC)" -- the UN science advisory body -- "makes a very conservative projection of total sea level rise by the end of the century," at 60 to 90 centimetres (24 to 35 inches), said Peter Wadhams, a professor of ocean physics at the University of Oxford who did not take part in the research.

That estimate, he added, assumes that the rate at which ocean levels rise will remain constant.

"Yet there is convincing evidence -- including accelerating losses of mass from Greenland and Antarctica -- that the rate is actually increasing, and increasing exponentially."

Greenland alone contains enough frozen water to lift oceans by about seven metres (23 feet), though experts disagree on the global warming threshold for irreversible melting, and how long that would take once set in motion.

"Most scientists now expect total rise to be well over a metre by the end of the century," Wadhams said.

The new study, published in Nature Climate Change, reconciles for the first time two distinct measurements of sea level rise.

The first looked one-by-one at three contributions: ocean expansion due to warming, changes in the amount of water stored on land, and loss of land-based ice from glaciers and ice sheets in Greenland and Antarctica.

- 'A major warning' -

The second was from satellite altimetry, which gauges heights on the Earth's surface from space.

The technique measures the time taken by a radar pulse to travel from a satellite antenna to the surface, and then back to a satellite receiver.

Up to now, altimetry data had shown little change in the rate of sea level rise over the last two decades, even as other measurements left little doubt that the pace was picking up.

"We corrected for a small but significant bias in the first decade of the satellite record," co-author Xuebin Zhang, a professor at Qingdao National Laboratory of Marine Science and Technology in China's Shandong Province, told AFP.

Overall, the pace of global average sea level rise went up from about 2.2 millimetres a year in 1993, to 3.3 millimetres a year two decades later.
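
Those two figures are the basis of the "50 percent faster" comparison above; a quick illustrative check:

```python
# Illustrative check of the "50 percent faster" figure from the reported rates.
rate_1993 = 2.2  # mm per year
rate_2014 = 3.3  # mm per year
relative_increase = (rate_2014 - rate_1993) / rate_1993
print(f"Relative increase: {relative_increase:.0%}")  # prints 50%
```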

In the early 1990s, they found, thermal expansion accounted for fully half of the added millimetres. Two decades later, that figure was only 30 percent.

Andrew Shepherd, director of the Centre for Polar Observation and Modelling at the University of Leeds in England, urged caution in interpreting the results.

"Even with decades of measurements, it is hard to be sure whether there has been a steady acceleration in the rate of global sea level rise during the satellite era because the change is so small," he said.

Disentangling single sources -- such as the massive chunk of ice atop Greenland -- is even harder.

But other researchers said the study should sound an alarm.

"This is a major warning about the dangers of a sea level rise that will continue for many centuries, even after global warming is stopped," said Brian Hoskins, chair of the Grantham Institute at Imperial College London.

Scientists throw light on mysterious ice age temperature jumps

Cardiff, UK (SPX) Jun 22, 2017

Scientists believe they have discovered the reason behind mysterious changes to the climate that saw temperatures fluctuate by up to 15C within just a few decades during the ice age periods. In a new study, the researchers show that rising levels of CO2 could have reached a tipping point during these glacial periods, triggering a series of chain events that caused temperatures to rise abruptly.

The findings, which have been published in the journal Nature Geoscience, add to mounting evidence suggesting that gradual changes such as rising CO2 levels can lead to sudden surprises in our climate, which can be triggered when a certain threshold is crossed.

Previous studies have shown that an essential part of the natural variability of our climate during glacial times is the repeated occurrence of abrupt climate transitions, known as Dansgaard-Oeschger events. These events are characterized by drastic temperature changes of up to 15C within a few decades in the high latitudes of the Northern Hemisphere. This was the case during the last glacial period around 100,000 to 20,000 years ago.

It is commonly believed that this was a result of sudden floods of freshwater across the North Atlantic, perhaps as a consequence of melting icebergs.

Co-author of the study Professor Stephen Barker, from Cardiff University's School of Earth and Ocean Sciences, said: "Our results offer an alternative explanation to this phenomenon and show that a gradual rise of CO2 within the atmosphere can hit a tipping point, triggering abrupt temperature shifts that drastically affect the climate across the Northern Hemisphere in a relatively short space of time.

"These findings add to mounting evidence suggesting that there are sweet spots or 'windows of opportunity' within climate space where so-called boundary conditions, such as the level of atmospheric CO2 or the size of continental ice sheets, make abrupt change more likely to occur. Of course, our study looks back in time and the future will be a very different place in terms of ice sheets and CO2 but it remains to be seen whether or not Earth's climate becomes more or less stable as we move forward from here".

Using climate models to understand the physical processes that were at play during the glacial periods, the team were able to show that a gradual rise in CO2 strengthened the trade winds across Central America by inducing an El Nino-like warming pattern with stronger warming in the East Pacific than the Western Atlantic.

As a result there was an increase in moisture transport out of the Atlantic, which effectively increased the salinity and density of the ocean surface, leading to an abrupt increase in circulation strength and temperature rise.

"This does not necessarily mean that a similar response would happen in the future with increasing CO2 levels, since the boundary conditions are different from the ice age," added by Professor Gerrit Lohmann, leader of the Paleoclimate Dynamics group at the Alfred Wegener Institute.

"Nevertheless, our study shows that climate models have the ability of simulating abrupt changes by gradual forcing as seen in paleoclimate data."

Building on this study, the team intend to produce a new reconstruction of global ice volume across the last glacial cycle, which will help to validate their proposition that certain boundaries can define windows of instability within the climate system.

Jury awards $218 mn to farmers in Syngenta GMO corn lawsuit

Chicago (AFP) June 23, 2017

A US federal jury on Friday ordered Swiss agribusiness giant Syngenta to pay nearly $218 million to 7,000 Kansas farmers after selling them genetically-modified corn seeds not approved for export to China.

The farmers suffered profound economic damage in 2013 when Chinese authorities refused imports of corn grown with Syngenta's bioengineered seeds, causing prices to plummet, according to a lawyer for the plaintiffs.

"The verdict is great news for corn farmers in Kansas and corn growers throughout the country who were seriously hurt by Syngenta's actions," Pat Stueve, a lawyer for the farmers, said in a statement.

The jury in Kansas found Syngenta negligent in the matter, and awarded $217.7 million in compensation to the farmers, court papers showed. The company said the case is "without merit."

The verdict came down after only a half day of deliberations, but covers only one of eight lawsuits targeting Syngenta over the matter, Stueve said.

"This is only the beginning. We look forward to pursuing justice for thousands more corn farmers in the months ahead."

Other cases involve farmers in agricultural states such as Nebraska, Iowa, Illinois and Ohio, with nationwide losses exceeding $5 billion, he said.

The company said it was "disappointed" with the verdict and will appeal.

The ruling "will only serve to deny American farmers access to future technologies even when they are fully approved in the US," adding that the two strains of corn seeds in the case had approval of US regulators and "in the key import markets recommended at the time by the National Corn Growers Association (NCGA) and other industry associations."

"American farmers shouldn't have to rely on a foreign government to decide what products they can use on their farms," the company said in a statement.

China's $10 billion strategic project in Myanmar sparks local ire

by Reuters on Friday, 9 June 2017 09:40 GMT

KYAUK PYU, Myanmar, June 9 (Reuters) - Days before the first supertanker carrying 140,000 tonnes of Chinese-bound crude oil arrived in Myanmar's Kyauk Pyu port, local officials confiscated Nyein Aye's fishing nets.

The 36-year-old fisherman was among hundreds banned from fishing a stretch of water near the entry point for a pipeline that pumps oil 770 km (480 miles) across Myanmar to southwest China and forms a crucial part of Beijing's "Belt and Road" project to deepen its economic links with Asia and beyond.

"How can we make a living if we're not allowed to catch fish?" said Nyein Aye, who bought a bigger boat just four months ago but now says his income has dropped by two-thirds due to a decreased catch resulting from restrictions on when and where he can fish. Last month he joined more than 100 people in a protest demanding compensation from pipeline operator Petrochina .

The pipeline is part of the nearly $10 billion Kyauk Pyu Special Economic Zone, a scheme at the heart of fast-warming Myanmar-China relations and whose success is crucial for the Southeast Asian nation's leader Aung San Suu Kyi.

Embattled Suu Kyi needs a big economic win to stem criticism that her first year in office has seen little progress on reform. China's support is also key to stabilising their shared border, where a spike in fighting with ethnic armed groups threatens the peace process Suu Kyi says is her top priority.

China's state-run CITIC Group, the main developer of the Kyauk Pyu Special Economic Zone, says it will create 100,000 jobs in the northwestern state of Rakhine, one of Myanmar's poorest regions.

But many local people say the project is being rushed through without consultation or regard for their way of life.

Suspicion of China runs deep in Myanmar, and public hostility due to environmental and other concerns has delayed or derailed Chinese mega-projects in the country in the past.

China says the Kyauk Pyu development is based on "win-win" co-operation between the two countries.

Since Beijing signalled it may abandon the huge Myitsone Dam hydroelectric project in Myanmar earlier this year, it has pushed for concessions on other strategic undertakings - including the Bay of Bengal port at Kyauk Pyu, which gives it an alternative route for energy imports from the Middle East.

Internal planning documents reviewed by Reuters and more than two dozen interviews with officials show work on contracts and land acquisition has already begun before the completion of studies on the impact on local people and the environment, which legal experts said could breach development laws.

The Kyauk Pyu Special Economic Zone will cover more than 4,200 acres (17 sq km). It includes the $7.3 billion deep sea port and a $2.3 billion industrial park, with plans to attract industries such as textiles and oil refining.

A Reuters tally based on internal planning documents and census data suggests 20,000 villagers, most of whom now depend on agriculture and fishing, are at risk of being relocated to make way for the project.

"There will be a huge project in the zone and many buildings will be built, so people who live in the area will be relocated," said Than Htut Oo, administrator of Kyauk Pyu, who also sits on the management committee of the economic zone.

He said the government has not publicly announced the plan, because it didn't want to "create panic" while it was still negotiating with the Chinese developer.

In April, Myanmar's President Htin Kyaw signed two agreements on the pipeline and the Kyauk Pyu port with his Chinese counterpart Xi Jinping, as Beijing pushed to revive a project that had stalled since its inception in 2009.

The agreements call for environmental and social assessments to be carried out as soon as possible.

While the studies are expected to take up to 15 months and have not yet started, CITIC has asked Myanmar to finalise contract terms by the end of this year so that the construction can start in 2018, said Soe Win, who leads the Myanmar management committee of the zone.

Such a schedule has alarmed experts who fear the project is being rushed.

"The environmental and social preparations for a project of these dimensions take years to complete and not months," said Vicky Bowman, head of the Myanmar Centre for Responsible Business and a former British ambassador to the country.

CITIC said in an email to Reuters it would engage "a world-renowned consulting firm" to carry out assessments.

Although large-scale land demarcation for the project has not yet started, 26 families have been displaced from farmland due to acquisitions that took place in 2014 for the construction of two dams, according to land documents and the land owners.

Experts say this violates Myanmar's environmental laws.

"Carrying out land acquisition before completing environmental impact assessments and resettlement plans is incompatible with national law," said Sean Bain, Myanmar-based legal consultant for human rights watchdog International Commission of Jurists.

CITIC says it will build a vocational school to provide training for skills needed by companies in the economic zone. It has given $1.5 million to local villages to develop businesses.

Reuters spoke to several villagers who had borrowed small sums from the village funds set up with this money.

"The CITIC money was very useful for us because most people in the village need money," said fisherman Thar Sai Aung, who borrowed $66 to buy new nets.

Chinese investors say they also plan to spend $1 million during the first five years of the development, and $500,000 per year thereafter to improve local living standards.

But villagers in Kyauk Pyu say they fear the project would not contribute to the development of the area because the operating companies employ mostly Chinese workers.

Of the more than 3,000 people living on Maday island, the entry point for the oil pipeline, only 47 have landed a job with PetroChina, while the number of Chinese workers stood at more than double that figure, data from labour authorities showed.

PetroChina did not respond to requests for comment. In a recent report it said Myanmar citizens made up 72 percent of its workforce in the country overall and it would continue to hire locally.

"I don't think there's hope for me to get a job at the zone," said fisherman Nyein Aye. He had been turned down 12 times for job applications with the pipeline operator.

"Chinese companies said they would develop our village and improve our livelihoods, but it turned out we are suffering every day."

Ambiguous pledges leave large uncertainty under Paris climate agreement

International Institute for Applied Systems Analysis

Under the pledges made by countries under the Paris Agreement on climate change, greenhouse gas emissions could range from 47 to 63 billion metric tons of CO2 equivalent (GtCO2e) per year in 2030, compared to about 52 GtCO2e in 2015, according to a new analysis. That range has critical consequences for the feasibility of achieving the goal of keeping warming "well below 2°C" over preindustrial levels, according to the study published in the journal Nature Communications.

The pledges, known as Nationally Determined Contributions (NDCs), lay out a roadmap of how individual countries will reduce their emissions, with the intention of adding up to a global emissions reduction sufficient to achieve the Paris targets. Yet the new study shows that these individual roadmaps leave out key details that would enable policymakers to see if they are headed in the right direction.

"Countries have put forward pledges to limit and reduce their emissions. But in many cases the actions described in these pledges are ambiguous or imprecise," says IIASA researcher Joeri Rogelj, who led the study. For example, some pledges focus on improving "emissions intensity," meaning reducing the emissions per dollar of economic output, but assumptions about socioeconomic growth are often implicit or unknown. Other countries focus on absolute emissions reductions, which are simpler to understand, or propose renewable energy targets, which can be expressed in different ways. Questions also remain about how much land-use-related climate mitigation will contribute, such as reducing deforestation or preserving forests.

The study finds that the emissions implied by the current NDCs can vary by -10 to +20% around the median estimate of 52 GtCO2e/yr in 2030. A previous study, also led by IIASA, had found that the emissions reductions set out in the NDCs would not put the world on track to achieve the Paris targets.
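
As a sanity check (illustrative arithmetic, not taken from the paper), applying that -10% to +20% spread to the 52 GtCO2e/yr median roughly reproduces the 47-63 GtCO2e/yr range quoted at the top of this article:

```python
# Illustrative only: map the -10%/+20% spread onto the 52 GtCO2e/yr median for 2030.
median = 52.0  # GtCO2e per year
low = median * (1 - 0.10)
high = median * (1 + 0.20)
print(f"Implied range: {low:.0f} to {high:.0f} GtCO2e/yr")  # about 47 to 62
```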

The new study confirms that earlier finding. It shows in a quantitative way that in order to keep warming to below 2°C, countries should either increase the stringency of their NDCs by 2030 or consider scaling up their ambition after 2030 by a factor of 4 to 25. If the ambition of NDCs is not further increased by 2030, the study finds no pathways for returning warming to 1.5°C by the end of the century.

"The new results allow us to more precisely understand what is driving the uncertainty in emissions estimates implied by the Paris pledges," says Rogelj. "With this information at hand, policymakers can formulate solutions to remediate this issue."

"This is the first global study to systematically explore the range of emissions outcomes under the current pledges. Our study allows us to identify the key contributors to the overall uncertainty as well as potential clarifications by countries that would be most promising to reduce the overall uncertainty," says IIASA Energy Program Director Keywan Riahi, a study coauthor.

The researchers find that uncertainty could be reduced by 10% with simple, technical clarifications, and could be further reduced by clearer guidelines for countries on building their NDCs. The study highlights the importance of a thorough and robust tracking process of progress made by countries towards the achievement of their NDCs and the Paris Agreement goals as a whole.

Reference

Rogelj J, Fricko O, Meinshausen M, Krey V, Zilliacus JJJ, Riahi K (2017). Understanding the origin of Paris Agreement emission uncertainties. Nature Communications. [doi: 10.1038/ncomms15748]

Offshore wind turbines vulnerable to Category 5 hurricane gusts

University of Colorado at Boulder

Offshore wind turbines built according to current standards may not be able to withstand the powerful gusts of a Category 5 hurricane, creating potential risk for any such turbines built in hurricane-prone areas, new University of Colorado Boulder-led research shows.

The study, which was conducted in collaboration with the National Center for Atmospheric Research in Boulder, Colorado and the U.S. Department of Energy's National Renewable Energy Laboratory in Golden, Colorado, highlights the limitations of current turbine design and could provide guidance for manufacturers and engineers looking to build more hurricane-resilient turbines in the future.

Offshore wind-energy development in the U.S. has ramped up in recent years, with projects either under consideration or already underway in most Atlantic coastal states from Maine to the Carolinas, as well as the West Coast and Great Lakes. The country's first utility-scale offshore wind farm, consisting of five turbines, began commercial operation in December 2016 off the coast of Rhode Island.

Turbine design standards are governed by the International Electrotechnical Commission (IEC). For offshore turbines, no specific guidelines for hurricane-force winds exist. Offshore turbines can be built larger than land-based turbines, however, owing to a manufacturer's ability to transport larger molded components such as blades via freighter rather than over land by rail or truck.

For the study, CU Boulder researchers set out to test the limits of the existing design standard. Due to a lack of observational data across the height of a wind turbine, they instead used large-eddy simulations to create a powerful hurricane with a computer.

"We wanted to understand the worst-case scenario for offshore wind turbines, and for hurricanes, that's a Category 5," said Rochelle Worsnop, a graduate researcher in CU Boulder's Department of Atmospheric and Oceanic Sciences (ATOC) and lead author of the study.

These uniquely high-resolution simulations showed that under Category 5 conditions, mean wind speeds near the storm's eyewall reached 90 meters-per-second, well in excess of the 50 meters-per-second threshold set by current standards.
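
To put that gap in perspective, here is a back-of-the-envelope illustration (not part of the study): aerodynamic loads scale roughly with the square of wind speed, since dynamic pressure goes as v squared, so 90 m/s winds imply several times the loading anticipated at the 50 m/s design threshold.

```python
# Back-of-the-envelope only: dynamic pressure (and hence rough aerodynamic load)
# scales with the square of wind speed. Real turbine design load cases are far
# more complex; this just illustrates the size of the gap.
design_speed = 50.0      # m/s, threshold in current design standards (cited above)
simulated_speed = 90.0   # m/s, mean eyewall wind in the Category 5 simulation
load_ratio = (simulated_speed / design_speed) ** 2
print(f"Approximate load ratio: {load_ratio:.1f}x")  # about 3.2x
```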

"Wind speeds of this magnitude have been observed in hurricanes before, but in only a few cases, and these observations are often questioned because of the hazardous conditions and limitations of instruments," said George Bryan of NCAR and a co-author of the study. "By using large-eddy simulations, we are able to show how such winds can develop and where they occur within hurricanes."

Furthermore, current standards do not account for veer, a measure of the change in wind direction across a vertical span. In the simulation, wind direction changed by as much as 55 degrees between the tip of the rotor and its hub, creating a potentially dangerous strain on the blade.

The findings could be used to help wind farm developers improve design standards as well as to help stakeholders make informed decisions about the costs, benefits and risks of placing turbines in hurricane-prone areas.

"The study will help inform design choices before offshore wind energy development ramps up in hurricane-prone regions," said Worsnop, who received funding from the National Science Foundation Graduate Research Fellowship Program to conduct this research. "We hope that this research will aid wind turbine manufacturers and developers in successfully tapping into the incredibly powerful wind resource just beyond our coastlines."

"Success could mean either building turbines that can survive these extreme conditions, or by understanding the overall risk so that risks can be mitigated, perhaps with financial instruments like insurance," said Professor Julie Lundquist of ATOC and CU Boulder's Renewable and Sustainable Energy Institute (RASEI), a co-author of the study. "The next stage of this work would be to assess how often these extreme winds would impact an offshore wind farm on the Atlantic coast over the 20-to-30-year lifetime of a typical wind farm."

The findings were recently published online in the journal Geophysical Research Letters, a publication of the American Geophysical Union.

Climate Change Misconceptions Common Among Teachers, Study Finds

June 07, 2017, Nathan Hurst, hurstn@missouri.edu, 573-882-6217

COLUMBIA, Mo. – Recent studies have shown that misconceptions about climate change and the scientific studies that have addressed climate change are pervasive among the U.S. public. Now, a new study by Benjamin Herman, assistant professor in the Department of Learning, Teaching and Curriculum in the University of Missouri College of Education, shows that many secondary school science teachers also possess several of these same misconceptions.

In the study, Herman surveyed 220 secondary science teachers in Florida and Puerto Rico to determine their knowledge about climate change science. The survey asked questions regarding things that do contribute to climate change, such as greenhouse gas emissions, and things that do not significantly contribute, such as the depletion of the ozone layer and the use of pesticides. The survey also asked whether controlled scientific experiments are required to validate climate change.

While the majority of the surveyed teachers accurately responded that fossil fuel use, automobiles and industry emissions were major causes of climate change, they also exhibited notable climate change misconceptions. For instance, nearly all of the Puerto Rico teachers and more than 70 percent of Florida teachers believed incorrectly that ozone layer depletion and pesticide use were at least minor, yet significant, causes of climate change. Additionally, Herman says that nearly 50 percent of Florida teachers and nearly 70 percent of Puerto Rico teachers think that climate change science must be studied through controlled experiments to be valid.

Herman says the teachers in his study exhibited climate change science misconceptions at a similar rate to average Americans. He says these results are understandable given that teachers are often overworked and not afforded professional development opportunities that would deepen their climate change science knowledge.

“Teachers want and need support to keep them abreast of scientific discoveries and developments and how scientists come to their well-established claims regarding climate change,” Herman said. “Climate change science involves many different types of science methods stemming from disciplines, including physics, biology, atmospheric science and earth science. Science teachers also need professional development directed at assisting them in their efforts to accurately and effectively engage students on this important issue. Because of existing misconceptions and misinformation regarding climate change, science teachers have a crucial professional and ethical responsibility to accurately convey to their students how climate change is studied and why scientists believe the climate is changing.”

The study, “Florida and Puerto Rico Secondary Science Teachers’ Knowledge and Teaching of Climate Change Science,” was published in the International Journal of Science and Mathematics Education. The study was coauthored by Allan Feldman and Vanessa Vernaza-Hernandez from the University of South Florida. The study was funded by a National Science Foundation Coastal Areas Climate Change Education Partnership Award grant.

Jobs in Wind Energy in 2016

Published: March 29th, 2017 By Climate Central

We’ve reached the end of the windiest month of the year. But in other months, wind will continue to play an increasingly large role in the U.S. power mix. At the end of last year, wind capacity surpassed hydroelectric capacity for the first time in the U.S.

Over the past decade, wind power has exploded in the U.S. Over that time, generating capacity from wind has increased by a factor of seven, surpassing 82,000 megawatts, or enough to power 24 million homes.

Wind is most consistent in the Great Plains and on the Front Range of the Rockies. The prevailing west winds come rushing off of the mountains, making the area especially conducive for generating electricity, as higher wind speeds produce disproportionately more power from a wind turbine.
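
That "disproportionately more power" follows from the standard wind-power relation, in which available power varies with the cube of wind speed. A small illustrative calculation, using hypothetical site speeds:

```python
# Illustrative only: wind power scales with the cube of wind speed,
# so modestly windier sites yield disproportionately more energy.
v_low = 7.0   # m/s, hypothetical site
v_high = 8.4  # m/s, hypothetical site that is 20% windier
power_ratio = (v_high / v_low) ** 3
print(f"20% more wind speed -> {power_ratio:.2f}x the power")  # about 1.73x
```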

The winds in the Southeast are not as consistent, especially in the doldrums of summer. Despite the lack of a consistent breeze across the region as a whole, the Southeast coast and offshore areas in particular allow for a modest amount of power generation.

Wind power is about more than electricity. The rapid installation of wind turbines has spurred job growth in the U.S.:

More than 100,000 jobs are related to the wind energy sector (planning, construction, operations)

40 states now have utility-scale wind energy projects

Texas leads the country in the number of jobs in the wind energy sector

The power provided by wind also means that the need for fossil fuels is declining, effectively slowing the increase of heat-trapping carbon dioxide emitted into the atmosphere.

Methodology: Windiest locations are based on a height of 80 meters (260 feet), typical tower height of wind turbines. Data provided by AWS Truepower via National Renewable Energy Laboratory. Job data provided by American Wind Energy Association for 2016. Carbon dioxide savings based on total installed wind capacity for each state, an operating capacity factor of 35 percent, and estimated carbon dioxide emissions avoided per unit of wind electricity in each state.
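
As a rough illustration of how the capacity and capacity-factor figures above translate into "homes powered" (the per-household consumption value, roughly 10.8 MWh per year, is an assumption and not taken from the article):

```python
# Rough illustration only; the ~10.8 MWh/yr household figure is an assumption.
capacity_mw = 82_000        # installed US wind capacity cited above
capacity_factor = 0.35      # operating capacity factor from the methodology note
hours_per_year = 8760
annual_generation_mwh = capacity_mw * capacity_factor * hours_per_year
homes_powered = annual_generation_mwh / 10.8   # MWh per home per year (assumed)
print(f"~{annual_generation_mwh / 1e6:.0f} TWh/yr, ~{homes_powered / 1e6:.0f} million homes")
# Prints roughly 251 TWh and ~23 million homes, in line with the ~24 million cited.
```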

Organisms in ballast water increasing despite discharge measures

New actions contemplated as larger ships, changes in traffic patterns may be offsetting current procedures

By Karl Blankenship on June 06, 2017

Ships arriving in Chesapeake Bay ports bring more than just cargo — in 2013 they also inadvertently released an estimated 10 billion live zooplankton from other parts of the world, a finding that surprised the researchers who recently reported the results.

Regulations aimed at reducing the risk of aquatic invasions went into effect more than a decade ago, and a team from the Smithsonian Environmental Research Center had expected to see a decrease in live organisms being released from ballast holds of ships.

Instead, they found that concentrations of coastal zooplankton discharged into Bay waters had increased nearly fivefold from releases checked before the new regulations took effect in 2004.

“It wasn’t quite what we expected to see,” said Greg Ruiz, SERC senior marine biologist and a co-author of the study. “We wanted to know how things had changed, and they changed in a way that we didn’t expect.”

So-called “biological pollution” from ships has been a concern since the 1980s, when releases of ballast water led to the devastating invasion of zebra mussels in the Great Lakes, causing billions of dollars in damages, harm to water quality and alterations to the lakes’ food chain.

Ballast water has also been blamed for introducing the Chinese mitten crab, the rapa whelk and other exotic species into the Chesapeake Bay. An Asian strain of Vibrio bacteria, which sickened two people who ate raw Bay oysters in 2010, is suspected to have arrived in a ship’s ballast tank.

Large ships often suck huge amounts of water into large ballast tanks when leaving a port to help stabilize the vessels during their voyage. A single ship can often hold 30,000–40,000 cubic meters of ballast water.

Until about a decade ago, that ballast water — along with any organisms in it — was routinely discharged into the water at destination ports. Starting in 2004, the Coast Guard required ships to exchange ballast water while they are more than 200 nautical miles offshore, replacing water drawn in at ports with ocean water. That was supposed to reduce the density of coastal species in the ballast tanks and replace them with oceanic species less likely to survive in the fresher water of ports.

But Smithsonian scientists were shocked when they examined ballast water collected in recent years from ships arriving in the ports of Norfolk and Baltimore and found that coastal zooplankton concentrations had dramatically increased when compared with samples gathered from 1993 to 2000, before ballast water exchange was required.

The scientists, who reported their findings in the journal PLOS ONE, say that several factors likely contributed to the increases, including the fact that the total amount of ballast water being discharged increased nearly fivefold during that time.

While overall shipping traffic has remained fairly steady in the Bay in recent decades, the amount of traffic by “bulkers,” primarily transporting coal, has increased steadily since 2000, figures in the study show.

Container ships both pick up and drop off cargo, and therefore often do not have to take on as much ballast. But bulkers leave the Bay full, typically taking coal to other countries, and return empty. That means they must take on large amounts of ballast water for their return voyage to stabilize the empty vessel, and most of that gets released when the ship returns to pick up another load.

From 2005 through 2013, the amount of ballast water being discharged into the Bay increased 374 percent, the scientists said. The increase was driven by bulkers, which accounted for more than three-quarters of the total ballast discharge in the final years of the study.

As a result, the effectiveness of the ballast exchange requirement was reduced simply because significantly more water was being released.
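
The arithmetic behind that point is multiplicative: the number of organisms delivered is the concentration in the tanks times the volume discharged, so growth in either factor scales the total. A minimal sketch with hypothetical numbers (not the study's data):

```python
# Hypothetical numbers, for illustration only: total delivery scales with
# concentration x volume, so more discharge can offset dilution from exchange.
def organisms_delivered(concentration_per_m3: float, volume_m3: float) -> float:
    return concentration_per_m3 * volume_m3

before = organisms_delivered(concentration_per_m3=100, volume_m3=1_000_000)
after = organisms_delivered(concentration_per_m3=200, volume_m3=4_000_000)
print(f"Change in organisms delivered: {after / before:.0f}x")  # 8x in this example
```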

But the scientists said that alone did not account for the overall increase in zooplankton concentrations, which are supposed to be greatly diluted by ballast water exchanges.

They suggested several other factors may be playing a role. Zooplankton concentrations have been poorly studied in other areas, the Smithsonian team said, and may have increased in ports where Bay-bound ships drew in water, especially because the origins of those ships changed over time. In the earlier years of the study, almost all of the ships arriving in the Bay came from the Eastern Mediterranean, but none of the vessels calling in more recent years originated there, as shipping patterns have changed.

Also, the length of voyages has decreased as ships have become faster, increasing the chance that zooplankton taken on in the port of origin would survive the trip.

And while all ships reported conducting ballast water exchanges, the paper noted that it is impossible to verify how well those were performed. Highly effective ballast water exchanges can eliminate 90 percent of the organisms from the port of origin, but the scientists said there’s no way of knowing whether the ships achieved that level of efficiency.

Most likely, the Smithsonian team concluded, all of those factors may have contributed to the increase. “There are so many interacting factors, there is no way to pinpoint one,” said Jenny Carney, the lead author on the paper.

The findings don’t mean that ballast water exchange does not help. “The zooplankton have increased surprisingly, but it would be even higher if we didn’t have ballast exchange,” Ruiz said.

But, the scientists said, research shows how shifts in trade and trade patterns can diminish the effectiveness of efforts to reduce the risk of biological invasions through ballast water exchange.

This has implications for the future, the authors said. While the demand for coal has dropped in the past few years, other factors could lead to the importation of more ballast water. A recent expansion of the Panama Canal has led to a new generation of even larger shipping vessels, the authors noted, and the ports of Baltimore and Norfolk are among the few along the Atlantic Coast capable of handling those ships. The expected opening late this year of a new terminal in Maryland to export liquefied natural gas could also lead to more shipping and ballast water.

“All of these factors combined can lead to highly dynamic changes in [ballast water] delivery to Chesapeake Bay,” the scientists wrote.

That could increase the risk of further invasions by nonnative species in the Bay, the scientists said.

“People should be concerned about this,” Ruiz said. “We know that invasions are a really major force of change, ecologically, economically and in terms of health, too. We see that with invasions around the world, including in Chesapeake Bay.”

Ballast water is considered to be the primary mechanism through which nonnative species colonize coastal waters around the globe. Zebra mussels, and the closely related quagga mussel, may be the poster children for the impact such invasions can cause. The filter-feeding mussels latch onto solid substrates in such huge numbers that they clog water intakes and have sunk navigational buoys.

They have spread avian botulism, which has killed thousands of birds in the Great Lakes, and they have caused the near extinction of some native mussel species. They filter algae from water, but are selective in the types that they consume, which has resulted in the overconsumption of some algae sought by fish — leading to the collapse of some populations — while allowing algae that contribute to poor water quality to persist.

They also accumulate toxic contaminants, which are passed up the food chain when the mussels are consumed by other species. They are blamed for more than $7 billion in damage to the Great Lakes fisheries alone, and have been spreading across the country. They reached the upper Chesapeake in the last few years, probably by attaching to recreational boats, though it’s unclear whether they will thrive in the Bay’s saltier environment.

But the Bay also has suspected ballast water invaders of its own, including the rapa whelk, a native of the western Pacific Ocean, which consumes native mollusks. It was first discovered in the Virginia portion of the Bay in the 1990s and appears to now be established.

The Chinese mitten crab, which was confirmed to be in Maryland’s portion of the Bay in 2005, has become an economic and ecological problem in other areas where it has become established and abundant.

Although coastal waters, including the Bay, are constantly subjected to new species from ballast water, it’s difficult to predict which will survive and potentially become problems. While those species thought to have entered the Bay through ballast water have not caused ecosystem-altering changes like zebra mussels in the Great Lakes, scientists note that some species may remain at low levels for years, even decades, until the right conditions allow their populations to explode.

“We are not saying that all species delivered in ballast tanks are going to have major impacts, but some subset of them can and will, and we see good evidence for that,” Ruiz said. “So at some point, with business as usual, one of those species will have a pretty big effect” in the Bay.

The risk might be reduced, over time, by new regulations that will be phased in starting this fall. Instead of relying on ballast water exchange, those rules require that ballast water be treated with approved technologies — typically chlorination or exposure to ultraviolet light — to greatly reduce the number of live organisms that could be discharged.

But it will be years before those technologies are installed on all new and existing ships. Also, they may not perform as well in the real world as they do in tests, said Mario Tamburri, of the University of Maryland Center for Environmental Science, who has been working on the new techniques.

“It is still going to remove more organisms than just doing an exchange or doing nothing,” Tamburri said. “Whether it is actually meeting the intent of the regulations, the discharge standard, that’s what’s questionable.”

Karl Blankenship is editor of the Bay Journal and executive director of Chesapeake Media Service. He has served as editor of the Bay Journal since its inception in 1991.

Environmental groups try to block Montana mine expansion

By Matt Volz, Associated Press | Posted Jun 8th, 2017 @ 5:45pm

HELENA, Mont. (AP) — Environmental advocacy groups launched a new attempt Thursday to halt the expansion of Montana's largest coal mine over its effects on climate change, after federal officials said it wouldn't contribute significantly to the nation's greenhouse gas emissions.

WildEarth Guardians and the Montana Environmental Information Center say in their lawsuit that the U.S. Interior Department's environmental review was shoddy, and argue the federal official who authorized the Spring Creek Mine expansion didn't have the authority to do so.

"From our perspective, they fell miserably short in accounting for the environmental implications of rubber-stamping more coal mining," said Jeremy Nichols, WildEarth Guardians' climate and energy program director. "It's scandalous that the coal industry seems able to survive only because of the federal government cutting corners and turning its back on climate change."

The Spring Creek Mine is the seventh-largest coal mine in the nation, and the largest in Montana. It is part of the Powder River Basin of Montana and Wyoming, which produces 40 percent of the nation's coal annually. The coal in the 1.7 square mile (4.4 square kilometer) expansion area, like most of the coal mined in the area, comes from publicly owned reserves leased to companies by the federal government.

The expansion could double Cloud Peak Energy's production of 18 million tons of coal a year from Spring Creek, according to the lawsuit. That coal would spew millions of tons of carbon dioxide into the air, which would contribute to climate change and air pollution, the environmental groups argue.
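
For a rough sense of scale, the arithmetic behind "millions of tons" can be sketched with an assumed emission factor of about 1.8 tons of carbon dioxide per ton of subbituminous coal burned — a general coal-combustion estimate, not a figure from the lawsuit or the article:

# Rough, illustrative CO2 estimate for Spring Creek's annual output.
# The 1.8 t CO2 per ton of coal factor is an assumption typical of
# subbituminous coal, not a number cited in the lawsuit or the article.
annual_coal_tons = 18_000_000     # current production cited in the article
co2_per_ton_coal = 1.8            # assumed emission factor

annual_co2_tons = annual_coal_tons * co2_per_ton_coal
print(f"Roughly {annual_co2_tons / 1e6:.0f} million tons of CO2 per year at current output")
# Doubling production, as the lawsuit contends the expansion could, would double this figure.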

It would also mean an increase in train cars carrying the coal through towns and cities on their way to market, which the environmental review didn't consider, according to the lawsuit.

Interior Department spokeswoman Heather Swift referred questions about the lawsuit to the U.S. Department of Justice. That agency did not respond to a request for comment.

WildEarth Guardians led a previous attempt to block the expansion when it was first proposed in 2012. In that case, U.S. District Judge Susan Watters of Billings ordered federal officials to re-examine the environmental impacts, but did not halt the expansion.

Last fall, Interior Department officials determined the expansion would have only a minor impact on the nation's greenhouse gas emissions. That prompted the new lawsuit, which argues a more thorough environmental review is needed.

This time, the groups added a new argument to their environmental claims. They say the acting field office manager who approved the expansion did not have the authority to do so.

The groups cited recent decisions that set aside approved coal leases in Wyoming and Colorado because lower-level officials had signed off on them.

In those cases, the Interior Board of Land Appeals ruled that coal lease modifications could only be approved by higher-ranking officials, such as the Bureau of Land Management's deputy state director.

Federal Judge Denies Trump Admin Appeal In Youth Climate Lawsuit

"A federal judge has denied the Trump administration's appeal in a climate change lawsuit, paving the way for the unprecedented suit to go to trial.

The case -- Juliana v. United States -- pits a group of youth climate plaintiffs against the federal government and the fossil fuel industry. The plaintiffs allege that the federal government, through its actions and coordination with the fossil fuel industry, has violated their constitutional right to a livable climate. It is the first climate lawsuit to rely on a version of the public trust doctrine -- known as atmospheric trust -- to make its case, and adds to a growing number of attempts to force climate action through the judicial branch.

The lawsuit was initially filed in August of 2015, against the Obama administration. The Obama administration, as well as three fossil fuel industry groups as intervenors, all filed motions to have the lawsuit dismissed, which was denied in November by U.S. Federal Judge Ann Aiken. In February, after President Donald Trump was sworn in, the youth plaintiffs filed notice with the court that they would be replacing Obama with Trump."

EPA delays chemical safety rule until 2019 while it evaluates necessity

By Devin Henry - 06/12/17 11:36 AM EDT

The Environmental Protection Agency (EPA) will delay implementation of an Obama-era chemical safety rule for nearly two years while it reassesses the necessity of the regulation.

The EPA announced on Monday that Administrator Scott Pruitt signed a directive last Friday delaying the chemical plant safety standards until at least Feb. 20, 2019.

The move comes after the EPA delayed the regulation in March amid discussions over the rule’s impact on businesses.

“We are seeking additional time to review the program, so that we can fully evaluate the public comments raised by multiple petitioners and consider other issues that may benefit from additional public input," Pruitt said in a statement.

Obama regulators in December finalized a rule beefing up safety standards at chemical production plants, calling for new emergency requirements for manufacturers regulated by the EPA.

Officials moved to overhaul chemical safety standards after a 2013 explosion at a chemical plant in Texas killed 15 people. Their rule would require companies to better prepare for accidents and expand the EPA's investigative and auditing powers.

But chemical companies wrote in a letter to Pruitt shortly after his February confirmation that the rule would raise “significant security concerns and compliance issues that will cause irreparable harm."

The EPA sought public comment in March on a proposal to delay the rule while considering those objections. The agency said it received 54,117 comments before Pruitt formally moved to delay the rule.

Pollution 'devastating' China's vital ecosystem, research shows

The startling extent to which man-made pollution is devastating the ability of China's vital ecosystems to offset damaging carbon emissions has been revealed.

UNIVERSITY OF EXETER

A pioneering new international study, led by the University of Exeter, has looked at the true impact air pollutants have in impeding the local vegetation's ability to absorb and store carbon from the atmosphere.

The study looked at the combined effects that surface ozone and aerosol particles - two of the primary atmospheric pollutants linked to public health and climate change - have on China's plant communities' ability to act as a carbon sink.

It found that ozone vegetation damage - which weakens leaf photosynthesis by oxidizing plant cells - far outweighs any positive impact aerosol particles may have in promoting carbon uptake by scattering sunlight and cooling temperatures.

While the damage caused to these vital ecosystems in China is not irreversible, the team of experts has warned that only drastic action will offer protection against long-term global warming.

The study is published in the journal Atmospheric Chemistry and Physics.

Professor Nadine Unger, from the University of Exeter's Mathematics department and a co-author of the paper, said: "We know that China suffers from the highest levels of air pollution in the world, and the adverse effects this has on human health and climate change are well documented.

"What is less clearly understood, however, is the impact it has on the regional carbon balance. The land ecosystems in China are thought to provide a natural carbon sink, but we didn't know whether air pollution inhibited or promoted carbon uptake.

"What is clear from this study is that the negative ozone vegetation damage far outstrips any benefits that an increase in aerosol particles may have. It is a stark warning that action needs to be taken now to tackle the effects man-made pollution is having on this part of the world before it is too late."

The team used state-of-the-art Earth System computer models, together with a vast array of existing measurement datasets, to assess the separate and combined effects of man-made ozone and aerosol pollution in Eastern China.

The study found that the Net Primary Productivity (NPP) - or the amount of carbon plants in an ecosystem can take in - is significantly reduced when the amount of surface ozone increases.

Crucially, this reduction is significantly greater than the effect aerosol particles have in encouraging plants to increase carbon intake through reducing canopy temperatures and increasing the scattering of light.

Professor Unger added: "Essentially, our results reveal a strong 'dampening effect' of air pollution on the land carbon uptake in China today.

"This is significant for a number of reasons, not least because the increase in surface ozone produced by man-made pollution in the region will continue to grow over the next 15 years unless something is done.

"If - and it is of course a big 'if' - China reduce their pollution to the maximum levels, we could reduce the amount of damage to the ecosystems by up to 70 per cent - offering protection of this critical ecosystem service and the mitigation of long-term global warming."

Cold conversion of food waste into renewable energy and fertilizer

A Concordia study shows bacteria in low-temperature environments could reduce Canada's carbon footprint

PUBLIC RELEASE: 31-MAY-2017

Researchers from Concordia's Department of Building, Civil and Environmental Engineering (BCEE) in collaboration with Bio-Terre Systems Inc. are taking the fight against global warming to colder climes.

Their weapon of choice? Cold-loving bacteria.

In a study published in Process Safety and Environmental Protection, authors Rajinikanth Rajagopal, David Bellavance and Mohammad Saifur Rahaman demonstrate the viability of using anaerobic digestion in a low-temperature (20°C) environment to convert solid food waste into renewable energy and organic fertilizer.

They employed psychrophilic bacteria -- which thrive in relatively low temperatures -- to break down food waste in a specially designed bioreactor. In doing so, they produced a specific methane yield comparable to that of more energy-intensive anaerobic digestion processes.

"There is enormous potential here to reduce the amount of fuel that we use for solid waste treatment," Rahaman explains.

"Managing and treating food waste is a global challenge, particularly for cold countries like Canada where the temperature often falls below -20°C and energy demands related to heating are high."

He adds that the most commonly used forms of anaerobic digestion require large amounts of energy to heat the bioreactors and maintain temperatures for the bacteria's optimal performance.

"What we've learned is that we can now use adapted psychrophilic bacteria to produce a level of methane comparable to those more common forms, while using less energy."

'A promising new research direction'

Globally, more than 1.3 billion tonnes of municipal waste are created each year, and that number is expected to increase to 2.2 billion by 2025. Most of it ends up in landfills, where it biodegrades over time, producing biogas, a mixture composed largely of methane and carbon dioxide (along with hydrogen sulfide) whose methane content makes it a potent contributor to warming.

Left alone, this methane-rich biogas poses a significant climate threat, as methane carries a global warming potential that is 21 times greater than that of carbon dioxide.
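
As a minimal illustration of how that multiplier is applied, using the 21-times factor cited above and a purely hypothetical quantity of methane:

# Convert a hypothetical methane release into CO2-equivalent terms using
# the global warming potential quoted in the article (21). The tonnage
# below is invented for illustration only.
methane_tonnes = 1_000
gwp_methane = 21

co2_equivalent_tonnes = methane_tonnes * gwp_methane
print(f"{methane_tonnes} tonnes of CH4 is roughly {co2_equivalent_tonnes} tonnes of CO2-equivalent")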

But, according to the researchers, engineered anaerobic digestion techniques can also be adapted to capture such gases and transform them into renewable energy.

By employing devices such as biogas storage domes, biofilters or combined heat and power co-generation systems, for instance, methane can be collected, cleaned and converted into heat or electricity that can then be substituted for most fossil fuels.

At an agronomic level, the process also contributes leftover nitrogen- and phosphorus-rich digestate material that can be subsequently recovered and used as plant fertilizer.

The process for feeding the bioreactor is unique. It involves a semi-continuously fed constant volume overflow approach: the amount of food waste fed into the bottom opening necessitates the removal of an equal amount of treated effluent from the top.
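
A minimal sketch of that feeding logic, with invented volumes rather than the study's actual operating parameters, looks like this:

# Illustrative semi-continuous, constant-volume overflow reactor: whatever
# volume of food-waste slurry is fed in at the bottom, an equal volume of
# treated effluent overflows from the top. All numbers are hypothetical,
# not the parameters used in the Concordia study.
reactor_volume_litres = 100.0
daily_feed_litres = 5.0

volume = reactor_volume_litres
for day in range(1, 8):
    volume += daily_feed_litres                  # slurry fed in at the bottom
    effluent = volume - reactor_volume_litres    # excess overflows from the top
    volume -= effluent
    print(f"Day {day}: fed {daily_feed_litres:.1f} L, overflowed {effluent:.1f} L, "
          f"reactor holds {volume:.1f} L")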

The researchers performed various tests on the extracted material to determine its physicochemical characteristics as well as to monitor the biogas quality and quantity.

"There aren't many studies that look into developing new applications for treating food waste," Rajagopal says. "We hope that this study will mark the beginning of a promising new research direction."

Why Don’t Green Buildings Live Up to Hype on Energy Efficiency?

Analysts call it the “energy performance gap” — the difference between promised energy savings in green buildings and the actual savings delivered. The problem, researchers say, is inept modeling systems that fail to capture how buildings really work.

By Richard Conniff • May 25, 2017

Not long ago in the southwest of England, a local community set out to replace a 1960s-vintage school with a new building using triple-pane windows and super-insulated walls to achieve the highest possible energy efficiency. The new school proudly opened on the same site as the old one, with the same number of students, and the same head person—and was soon burning more energy in a month than the old building had in a year.

The underfloor heating system in the new building was so badly designed that the windows automatically opened to dump heat several times a day even in winter. A camera in the parking lot somehow got wired as if it were a thermal sensor, and put out a call for energy any time anything passed in front of the lens. It was “a catalogue of disasters,” according to David Coley, a University of Bath specialist who came in to investigate.

Many of the disasters were traceable to the building energy model, a software simulation of energy use that is a critical step in designing any building intended to be green. Among other errors, the designers had extrapolated their plan from a simplified model of an isolated classroom set in a flat landscape, with full sun for much of the day. That dictated window tinting and shading to reduce solar gain. Nobody seems to have noticed that the new school actually stood in a valley surrounded by shade trees and needed all the solar gain it could get. The classrooms were so dark the lights had to be on all day.

It was an extreme case. But it was also a good example, according to Coley, of how overly optimistic energy modeling helps cause the “energy performance gap,” a problem that has become frustratingly familiar in green building projects. The performance gap refers to the failure of energy improvements, often undertaken at great expense, to deliver some (or occasionally all) of the promised savings. A study last year of refurbished apartment buildings in Germany, for instance, found that they missed the predicted energy savings by anywhere from 5 to 28 percent. In Britain, an evaluation of 50 “leading-edge modern buildings,” from supermarkets to health care centers, reported that they “were routinely using up to 3.5 times more energy than their design had allowed for” — and producing on average 3.8 times the predicted carbon emissions.
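
One common way to express the gap is measured consumption relative to the design prediction; the figures below are hypothetical examples, not data from the German or British studies:

# Illustrative definition of the energy performance gap: the fractional
# overshoot of measured energy use over the design prediction. The inputs
# are invented examples.
def performance_gap(predicted_kwh, measured_kwh):
    """Return the fractional overshoot of measured over predicted energy use."""
    return (measured_kwh - predicted_kwh) / predicted_kwh

print(performance_gap(100_000, 128_000))   # 0.28 -> consumption 28 percent above prediction
print(performance_gap(100_000, 350_000))   # 2.5  -> 3.5 times the predicted use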

The performance gap is “a vast, terrible enormous problem,” in the words of one building technology specialist, and that’s not an exaggeration. Though much of the public concern about energy consumption and climate change focuses on automotive miles-per-gallon, the entire transport sector — including trains, planes, ships, trucks, and cars — accounts for just 26 percent of U.S. climate change emissions. Buildings come in at 40 percent, and they are the fastest growing source of emissions, according to the U.S. Green Building Council.

Eliminating the performance gap matters particularly for European Union nations, which have a legally binding commitment to reduce emissions by 80 to 95 percent below 1990 levels by mid-century. But knowing with confidence what savings will result matters for anybody trying to figure out how much to invest in a particular energy improvement.

Researchers have generally blamed the performance gap on careless work by builders, overly complicated energy-saving technology, or the bad behaviors of the eventual occupants of a building. But in a new study, Coley and his co-authors put much of the blame on inept energy modeling. The title of the study asks the provocative question “Are Modelers Literate?” Even more provocatively, a press release from the University of Bath likens the misleading claims about building energy performance to the Volkswagen emissions scandal, in which actual emissions from diesel engine cars were up to 40 times higher than “the performance promised by the car manufacturer.”

For their study, Coley and his co-authors surveyed 108 building industry professionals — architects, engineers, and energy consultants — who routinely use energy performance models. To keep the problem simple, the researchers asked participants to look at a typical British semi-detached home recently updated to meet current building codes. Then they asked test subjects to rank which improvements made the most difference to energy performance. Their answers had little correlation with objective reality, as determined by a study monitoring the actual energy performance of that home hour-by-hour over the course of a year. A quarter of the test subjects made judgments “that appeared worse than a person responding at random,” according to the study, which concluded that the sample of modelers, “and by implication the population of building modelers, cannot be considered modeling literate.”
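
A minimal sketch of the kind of comparison the study describes — ranking candidate improvements by predicted savings and checking that ordering against measured savings — might look like the following; the measures and numbers are invented and do not reproduce the Bath team's data or scoring method:

# Compare a modeler's predicted ordering of energy-saving measures with a
# measured ordering. All measures and savings values are hypothetical.
predicted_savings = {"loft insulation": 30, "new boiler": 25,
                     "triple glazing": 20, "draught proofing": 5}    # kWh/m2, invented
measured_savings  = {"loft insulation": 12, "new boiler": 28,
                     "triple glazing": 6,  "draught proofing": 9}    # kWh/m2, invented

def ranking(savings):
    """Return measure names sorted from largest to smallest saving."""
    return [name for name, _ in sorted(savings.items(), key=lambda kv: kv[1], reverse=True)]

print("Predicted order:", ranking(predicted_savings))
print("Measured order: ", ranking(measured_savings))
# A modeler whose predicted ordering bears little relation to the measured
# ordering is, in the study's terms, doing no better than responding at random.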

Predictably, that conclusion raised hackles. “The sample seems odd to me,” said Evan Mills, a building technology specialist at Lawrence Berkeley National Laboratory, “to include so many people who are junior in the practice, and then to be criticizing the industry at large.” He noted that almost two-thirds of the 108 test subjects had five years or less experience in construction. But Coley and his co-authors found that even test subjects with “higher-level qualifications, or having many years of experience in modeling,” were no more accurate than their juniors.

In any case, Mills acknowledged, “the performance gap is real, and we must be aware of models not properly capturing things. We have cases where modelers will come up with a savings measure that is more than the energy use of the house, because they are just working with the model,” and not paying attention to the real house.

That sort of problem — energy models showing unreasonable results — also turns up at the preliminary stage on 50 percent of projects going through the LEED certification process, said Gail Hampsmire of the U.S. Green Building Council. Designers have a tendency to take a “black box” approach, providing whatever inputs a particular energy model requires and then accepting the outputs “without evaluating the reasonability of those results,” she said. “You always have the issue of garbage in/garbage out, and the capability of the modeler to identify whether they are getting garbage out is critical.”

So what’s the fix? The current accreditation requirements for energy modelers are “very gentle,” said Coley, but “when you’re trying to get something off the ground relatively quickly, you can’t send everybody back to college for three years.” In any case, the problem isn’t really education in the formal sense.

“It has to do with feedback,” he said, or the lack of it. The culture of building construction says it’s perfectly reasonable for architects — but not energy modelers — to travel hundreds of miles to see how the actual building compares with what they designed. For energy modelers, there’s not even an expectation that they’ll get on the phone with the building manager at year one and ask how energy usage compares with the original model. As a result, said Coley, energy modeling can become like theoretical physics: “You can very easily create a whole web of theories, and then you find yourself studying the physics of your theories, not the physics of the real world.”

The answer, he suggested, is a regulatory requirement that modelers follow up on their work by routinely checking their predictions against a building’s actual energy consumption. A system of modest inducements could also make that feedback more broadly available — for instance, by promising to take three weeks off the planning permissions process for developers who commit to posting actual energy usage to an online database. The Green Building Council has begun to require that sort of reporting for projects seeking LEED certification, said Hampsmire, with an online platform now in development “for building owners to track their own performance and compare it with other buildings.”

A second problem, according to Coley, is the tendency of government agencies to require simplified energy models at the start of the design process. The requirements often include certain uniform assumptions about energy use, making it easier to compare one building with another. “Because you have to do that at the start, it becomes the default, and this sets up a kind of ‘Alice in Wonderland’ world, and it’s not surprising that modelers model this artificial world.” But at least in the United States that has become less of a problem in recent years, according to Hampsmire. Current building code requirements are “fairly good,” she said. “They don’t say, ‘Model energy use for a building occupied eight hours a day,’” or some other arbitrary standard. Instead, “they specifically state that all energy use has to be modeled as anticipated.”

The takeaway from all this isn’t to discredit energy modeling but to improve it. Builders increasingly need realistic modeling, said Coley, by people with a deep knowledge of building physics and at least as much experience with real buildings as with energy models. Without that, the result will be even more $500-million office blocks with too much glass on the southern exposure, causing everybody inside to bake on a hot summer afternoon. Without smart energy modeling, the result will be a world spinning even faster into out-of-control climate change.

“This isn’t rocket science,” said the Berkeley Laboratory’s Mills. But then he added, “It’s harder than rocket science.”

Richard Conniff is a National Magazine Award-winning writer whose articles have appeared in The New York Times, Smithsonian, The Atlantic, National Geographic, and other publications. His latest book is House of Lost Worlds: Dinosaurs, Dynasties, and the Story of Life on Earth.

Rising Seas May Wipe Out These Jersey Towns, but They're Still Rated AAA

by Christopher Flavelle

‎May 25, 2017

Few parts of the U.S. are as exposed to the threats from climate change as Ocean County, New Jersey. It was here in Seaside Heights that Hurricane Sandy flooded an oceanfront amusement park, leaving an inundated roller coaster as an iconic image of rising sea levels. Scientists say more floods and stronger hurricanes are likely as the planet warms.

Yet last summer, when Ocean County wanted to sell $31 million in bonds maturing over 20 years, neither of its two rating companies, Moody’s Investors Service and S&P Global Ratings, asked any questions about the expected effect of climate change on its finances.

"It didn’t come up, which says to me they’re not concerned about it," says John Bartlett, the Ocean County representative who negotiated with the rating companies. Both gave the bonds a perfect triple-A rating.

The same rating companies that were caught flat-footed by the downturn in the mortgage market during the global financial crisis that ended in 2009 may be underestimating the threat of climate change to coastal communities. If repeated storms and floods are likely to send property values -- and tax revenue -- sinking while spending on sea walls, storm drains or flood-resistant buildings goes up, investors say bond buyers should be warned.

"They are supposed to identify risk to investors," said Eric Glass, a fixed-income portfolio manager at Alliance Bernstein, a New York investment management firm that handles $500 billion in assets. "This is a material risk."

Breckinridge Capital Advisors, a Boston-based firm specializing in fixed-income investments, is already accounting for those risks internally: Last year, it downgraded a borrower in Florida due to climate risk, citing the need for additional capital spending because of future flooding.

Rob Fernandez, its director of environmental, social and governance research, said rating companies should do the same. "Either incorporate these factors, or, if you say that you are, tell us how you’re doing it," he said.

S&P and Moody’s say they’re working on how to incorporate the risk to bonds from severe or unpredictable weather. Moody’s released a report about climate impacts on corporate bond ratings last November and is preparing a similar report on municipal bonds now.

Fitch Ratings Ltd. is more skeptical.

"Some of these disasters, it’s going to sound callous and terrible, but they’re not credit-negative," Amy Laskey, managing director for the local government group at Fitch, said in an interview. "They rebuild, and the new facilities are of higher quality and higher value than the old ones."

For more than a century, rating companies have published information helping investors gauge the likelihood that companies and governments will be able to pay back the money they borrow. Investors use those ratings to decide which bonds to buy and gauge the risk of their portfolio. For most of that time, the determinants of creditworthiness were fairly constant, including revenue, debt levels and financial management. And municipal defaults are rare: Moody’s reports fewer than 100 defaults by municipal borrowers it rated between 1970 and 2014.

Climate change introduces a new risk, especially for coastal cities, as storms and floods increase in frequency and intensity, threatening to destroy property and push out residents. That, in turn, can reduce economic activity and tax revenue. Rising seas exacerbate those threats and pose new ones, as expensive property along the water becomes more costly to protect -- and, in some cases, may get swallowed up by the ocean and disappear from the property-tax rolls entirely.

Just as a shrinking auto industry slowly crippled Detroit, leading to an exodus of residents and, eventually, its bankruptcy in 2013, other cities could face the accumulating risks of storms or floods -- and then suddenly encounter a crisis.

"One of the first questions that we’re going to ask when confronted with an issuer along the coast of Texas, or on the coast of Florida, is: How are you going about addressing, mitigating the impacts of climate change?" Glass, Alliance Bernstein, said. And if local officials don’t have a good answer to that question, he added, "We will not invest, period."

When asked by Bloomberg, none of the big three bond raters could cite an example of climate risk affecting the rating of a city’s bonds.

Kurt Forsgren, a managing director at S&P, said its municipal ratings remain "largely driven by financial performance." He said the company was looking for ways to account for climate change in ratings, including through a city’s ability to access insurance.

Henry Shilling, Moody’s senior vice president for environment, social and governance risks, said the company is planning to issue a report this summer that explains how it will incorporate climate change into its municipal ratings. "It’s a bit of a journey," he said.

Last September, when Hilton Head Island in South Carolina issued bonds that mature over 20 years, Moody’s gave the debt a triple-A rating. In January 2016, all three major bond companies gave triple-A ratings to long-term bonds issued by the city of Virginia Beach, which the U.S. Navy has said faces severe threats from climate change.

The threat isn’t limited to smaller cities. The World Bank called Boston one of the 10 cities globally that are most financially exposed to flooding. But in March, when Boston issued $150 million in bonds maturing over 20 years, Moody’s and S&P each gave those bonds top ratings.

Of course, predictions are hard, especially about the future. While scientists are generally united about the science of climate change, its pace remains uncertain. And what all of that will mean for communities and their likelihood of paying back bonds is not a simple calculation. Ocean County continued to pay back its current debt load after Sandy, and will still have a lot of oceanfront property if its current coast is swamped. The oceanfront just won’t be in the same place.

The storms or floods "might be so severe that it’s going to wipe out the taxation ability," said Bob Buhr, a former vice president at Moody’s who retired last year as a director at Societe Generale SA. "I think this is a real risk."

In May 2016, 117 investors with $19 trillion in assets signed a statement calling for credit ratings to include "systematic and transparent consideration" of environmental and other factors. Signatories also included rating companies from China, the U.S. and elsewhere, including Moody’s and S&P.

Laskey, of Fitch, was skeptical that rating companies could or should account for climate risk in municipal ratings.

"We’re not emergency-preparedness experts," she said in a phone interview. "Unless we see reason to think, ‘Oh, they’re not paying attention,’ we assume that they’re competent, and they’re doing what they need to do in terms of preparedness."

That view is at odds with the picture painted by engineers, safety advocates and insurers. Timothy Reinhold, senior vice president for research at the Insurance Institute for Business & Home Safety, a group funded by insurers, said local officials aren’t doing enough to prepare for the threats of climate change.

"While most coastal communities and cities have weather-related disaster response plans, many older, existing structures within these communities are not as durable, or as resilient as they could and should be," Reinhold wrote in an email.

Politically Fraught

The types of actions that cities could take to reduce their risk -- including tougher building codes, fewer building permits near the coast and buying out the most vulnerable properties -- are politically fraught. And the benefits of those policies are typically years away, long after today’s leaders have retired.

The weakness of other incentives leaves the risk of a credit downgrade as one of the most effective prompts available to spur cities to deal with the threat, according to Craig Fugate, director of the Federal Emergency Management Agency under President Barack Obama.

"They need cheap money to finance government," Fugate said in a phone interview. If climate considerations meant higher interest rates, "not only will you have their attention. You’ll actually see changes."

Fugate also said rating companies were wrong to assume that cities are well prepared for climate change, or that their revenue will necessarily recover after a natural disaster.

As an example, he cited the case of Homestead, a city south of Miami that bore the worst damage from Hurricane Andrew in 1992. The city’s largest employer, Homestead Air Force Base, was destroyed in the storm; rather than rebuild it, the federal government decided to include the facility in its base closures.

Fugate said climate change increases the risk that something similar could happen to other places along the coast -- and they won’t ever be able to bounce back as Homestead eventually did.

"If that tax base does not come back," he warned, "they cannot service their debt." Asked about rating companies’ insistence that such risks are remote, Fugate scoffed. "Weren’t these the same people telling us that the subprime mortgage crisis would never happen?"

UM scientists receive grant to investigate heat trapped in Greenland's snow cover

The University of Montana

MISSOULA - A new $1.54 million grant from the National Science Foundation will fund University of Montana geoscientists as they study the deep layer of compacted snow covering most of Greenland's ice sheet.

Lead investigator Joel Harper and co-investigator Toby Meierbachtol, both from UM's Department of Geosciences, received $760,000 for the research at UM and $400,000 for chartering aircraft. A collaborator at the University of Wyoming also received $380,000 for the project.

The research will investigate the development of the snow layer covering Greenland as it melts during the summer months. Meltwater percolates into the underlying snow and refreezes to form deep layers of ice. The melting absorbs heat from the atmosphere, and the refreezing releases the heat into the ice sheet.

The icy snow builds up over years to form a layer of compacted dense snow called firn, which can reach up to 90 meters thick. The firn is a key component of the ice sheet; as its porous structure absorbs meltwater, it changes ice sheet elevation through compaction and influences heat exchanges between the ice sheet and the climate system.

Little is known about the firn layer's structure, temperature or thickness, and two contrasting theories explain how it has developed over time - one suggesting the layer still has the ability to absorb more meltwater and the other suggesting it does not. Research is critical to understanding whether future melt will percolate downward and refreeze in the empty pore space or be routed off the ice sheet, which has implications for sea level rise and heat transfer from the atmosphere to the ice sheet.

The project will quantify the structural and thermal frameworks of Greenland's firn layer by drilling boreholes through it to conduct measurements and experiments deep within the layer.

"The mixture of cold snow with scattered pockets of wet snow makes challenging drilling conditions because the cold drill tends to freeze to the wet snow," Harper said. "That's why there is so little known about this area."

The researchers have designed a new type of drill they hope will solve this problem, and they will use the data collected in the holes to guide computer modeling.

The team will visit the ice sheet six times in three years. A modified DC-3 airplane with skis and powerful engines will deliver the researchers near the top of the ice sheet, and from there they will make two long traverses with their equipment. The project will involve both graduate and undergraduate students.

Tiny shells indicate big changes to global carbon cycle

Future conditions not only stress marine creatures but also may throw off ocean carbon balance

University of California - Davis

Experiments with tiny, shelled organisms in the ocean suggest big changes to the global carbon cycle are underway, according to a study from the University of California, Davis.

For the study, published in the journal Scientific Reports, scientists raised foraminifera -- single-celled organisms about the size of a grain of sand -- at the UC Davis Bodega Marine Laboratory under future, high CO2 conditions.

These tiny organisms, commonly called "forams," are ubiquitous in marine environments and play a key role in food webs and the ocean carbon cycle.

STRESSED UNDER FUTURE CONDITIONS

After exposing them to a range of acidity levels, UC Davis scientists found that under high CO2, or more acidic, conditions, the foraminifera had trouble building their shells and making spines, an important feature of their shells.

They also showed signs of physiological stress, reducing their metabolism and slowing their respiration to undetectable levels.

This is the first study of its kind to show the combined impact of shell building, spine repair, and physiological stress in foraminifera under high CO2 conditions. The study suggests that stressed and impaired foraminifera could indicate a larger scale disruption of carbon cycling in the ocean.

OFF BALANCE

As a marine calcifier, foraminifera use calcium carbonate to build their shells, a process that plays an integral part in balancing the carbon cycle.

Normally, healthy foraminifera calcify their shells and sink to the ocean floor after they die, taking the calcite with them. This moves alkalinity, which helps neutralize acidity, to the seafloor.

When foraminifera calcify less, their ability to neutralize acidity also lessens, making the deep ocean more acidic.
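
The underlying chemistry is the standard calcification reaction — a textbook relationship, not an equation from the UC Davis paper. It runs forward when forams build shells near the surface and in reverse when sinking calcite dissolves at depth, returning alkalinity to deep water:

\[ \mathrm{Ca^{2+} + 2\,HCO_3^- \;\rightleftharpoons\; CaCO_3 + CO_2 + H_2O} \]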

But what happens in the deep ocean doesn't stay in the deep ocean.

IMPACTS FOR THOUSANDS OF YEARS

"It's not out-of-sight, out-of-mind," said lead author Catherine Davis, a Ph.D. student at UC Davis during the study and currently a postdoctoral associate at the University of South Carolina. "That acidified water from the deep will rise again. If we do something that acidifies the deep ocean, that affects atmospheric and ocean carbon dioxide concentrations on time scales of thousands of years."

Davis said the geologic record shows that such imbalances have occurred in the world's oceans before, but only during times of major change.

"This points to one of the longer time-scale effects of anthropogenic climate change that we don't understand yet," Davis said.

UPWELLING BRINGS 'FUTURE' TO SURFACE

One way acidified water returns to the surface is through upwelling, when strong winds periodically push nutrient-rich water from the deep ocean up to the surface. Upwelling supports some of the planet's most productive fisheries and ecosystems. But additional anthropogenic, or human-caused, CO2 in the system is expected to impact fisheries and coastal ecosystems.

UC Davis' Bodega Marine Laboratory in Northern California is near one of the world's most intense coastal upwelling areas. At times, it experiences conditions most of the ocean isn't expected to experience for decades or hundreds of years.

"Seasonal upwelling means that we have an opportunity to study organisms in high CO2, acidic waters today -- a window into how the ocean may look more often in the future," said co-author Tessa Hill, an associate professor in earth and planetary sciences at UC Davis. "We might have expected that a species of foraminifera well-adapted to Northern California wouldn't respond negatively to high CO2 conditions, but that expectation was wrong. This study provides insight into how an important marine calcifier may respond to future conditions, and send ripple effects through food webs and carbon cycling."

The study's other co-authors include Emily Rivest from UC Davis and Virginia Institute of Marine Science, UC Davis professors Brian Gaylord and Eric Sanford, and UC Davis associate research scientist Ann Russell.

The study was supported by the National Science Foundation and the Cushman Foundation Johanna M. Resig Fellowship.

Nagoya University researchers break down plastic waste

Nagoya University-based research team develops new highly efficient catalyst for breaking resistant chemical bonds, paving the way for easier recycling of plastic waste

Nagoya, Japan - What do proteins and Kevlar have in common? Both feature long chain molecules that are strung together by amide bonds. These strong chemical bonds are also common to many other naturally occurring molecules, as well as man-made pharmaceuticals and plastics. Although amide bonds can give great strength to plastics, the difficulty of breaking those bonds usually prevents the recovery of useful products when the plastics are later recycled. Catalysts are widely used in chemistry to help speed up reactions, but breaking the kinds of amide bonds found in plastics such as nylon, and in other materials, requires harsh conditions and large amounts of energy.

Building on their previous work, a research team at Nagoya University recently developed a series of organometallic ruthenium catalysts to break down even the toughest amide bonds effectively under mild conditions.

"Our previous catalysts could hydrogenate most amide bonds, but the reactions needed a long time at high temperature and high pressure. This new ruthenium catalyst can hydrogenate difficult substrates under much milder conditions," says lead author Takashi Miura.

Hydrogenation is the key step leading to breakdown of amide bonds. The catalyst features a ruthenium atom supported in an organic framework. This ruthenium atom can adsorb hydrogen and deliver it to the amide bond to initiate the breakdown. The team probed the position of hydrogen on the catalyst along the reaction pathway and modified the shape of the supporting framework. By making sure that the hydrogen molecule was in the best possible position for interaction with amide bonds, the team achieved much more effective hydrogenation.
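
In general terms — as a generic scheme rather than the specific substrates reported by the Nagoya team — hydrogenating an amide cleaves the carbon-nitrogen bond to give an alcohol and an amine:

\[ \mathrm{R{-}C(=O){-}NR'R'' \;+\; 2\,H_2 \;\longrightarrow\; R{-}CH_2OH \;+\; H{-}NR'R''} \]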

Group leader Susumu Saito says, "The changes we made to the catalyst allowed some tricky amide bonds to be selectively cleaved for the first time. This catalyst has great potential for making designer peptides for pharmaceutics and could also be used to recover materials from waste plastics to help realize an anthropogenic chemical carbon cycle."

The article, "Multifaceted catalytic hydrogenation of amides via diverse activation of a sterically confined bipyridine-ruthenium framework" was published in Scientific Reports at DOI: 10.1038/s41598-017-01645-z

Losing sleep over climate change

University of California - San Diego

Climate change may keep you awake -- and not just metaphorically. Nights that are warmer than normal can harm human sleep, researchers show in a new paper, with the poor and elderly most affected. According to their findings, if climate change is not addressed, temperatures in 2050 could cost people in the United States millions of additional nights of insufficient sleep per year. By 2099, the figure could rise by several hundred million more nights of lost sleep annually.

The study was led by Nick Obradovich, who conducted much of the research as a doctoral student in political science at the University of California San Diego. He was inspired to investigate the question by the heat wave that hit San Diego in October of 2015. Obradovich was having trouble sleeping. He tossed and he turned, the window AC in his North Park home providing little relief from the record-breaking temperatures. At school, he noticed that fellow students were also looking grumpy and bedraggled, and it got him thinking: Had anyone looked at what climate change might do to sleep?

Published by Science Advances, the research represents the largest real-world study to date to find a relationship between reports of insufficient sleep and unusually warm nighttime temperatures. It is the first to apply the discovered relationship to projected climate change.

"Sleep has been well-established by other researchers as a critical component of human health. Too little sleep can make a person more susceptible to disease and chronic illness, and it can harm psychological well-being and cognitive functioning," Obradovich said. "What our study shows is not only that ambient temperature can play a role in disrupting sleep but also that climate change might make the situation worse by driving up rates of sleep loss."

Obradovich is now a postdoctoral fellow at Harvard's Kennedy School of Government and a research scientist at the MIT Media Lab. He is also a fellow of the Center for Marine Biodiversity and Conservation at UC San Diego's Scripps Institution of Oceanography. Obradovich worked on the study with Robyn Migliorini, a student in the San Diego State University/UC San Diego Joint Doctoral Program in Clinical Psychology, and sleep researcher Sara Mednick of UC Riverside. Obradovich's dissertation advisor, social scientist James Fowler of UC San Diego, is also a co-author.

The study starts with data from 765,000 U.S. residents between 2002 and 2011 who responded to a public health survey, the Behavioral Risk Factor Surveillance System, from the Centers for Disease Control and Prevention. The study then links data on self-reported nights of insufficient sleep to daily temperature data from the National Centers for Environmental Information. Finally, it combines the effects of unusually warm temperatures on sleep with climate model projections.

The main finding is that anomalous increases in nighttime temperature by 1 degree Celsius translate to three nights of insufficient sleep per 100 individuals per month. To put that in perspective: If we had a single month of nightly temperatures averaging 1 degree Celsius higher than normal, that is equivalent to 9 million more nights of insufficient sleep in a month across the population of the United States today, or 110 million extra nights of insufficient sleep annually.
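
That scaling can be checked directly. Assuming a U.S. population of roughly 300 million (an approximation, not a figure stated in the paper), the result is consistent with the numbers quoted above:

# Check of the scaling described above: 3 extra nights of insufficient sleep
# per 100 people per month for each 1 degree C nighttime anomaly. The
# population figure is an approximation, not a number from the study.
us_population = 300_000_000
nights_per_100_per_month = 3

extra_nights_per_month = us_population / 100 * nights_per_100_per_month
extra_nights_per_year = extra_nights_per_month * 12
print(f"{extra_nights_per_month / 1e6:.0f} million extra nights per month")   # about 9 million
print(f"{extra_nights_per_year / 1e6:.0f} million extra nights per year")     # about 108 million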

The negative effect of warmer nights is most acute in summer, the research shows. It is almost three times as high in summer as during any other season.

The effect is also not spread evenly across all demographic groups. Those whose income is below $50,000 and those who are aged 65 and older are affected most severely. For older people, the effect is twice that of younger adults. And for the lower-income group, it is three times worse than for people who are better off financially.

Using climate projections for 2050 and 2099 by NASA Earth Exchange, the study paints a bleak picture of the future if the relationship between warmer nights and disrupted sleep persists. Warmer temperatures could cause six additional nights of insufficient sleep per 100 individuals by 2050 and approximately 14 extra nights per 100 by 2099.

"The U.S. is relatively temperate and, in global terms, quite prosperous," Obradovich said. "We don't have sleep data from around the world, but assuming the pattern is similar, one can imagine that in places that are warmer or poorer or both, what we'd find could be even worse."

The research was supported in part by the National Science Foundation, grants no. DGE0707423 and TG-SES130013 to Obradovich, DGE1247398 to Migliorini, and BCS1439210 to Mednick. Mednick is also funded by the National Institute on Aging (R01AG046646) and the Department of Defense (Office of Naval Research Young Investigator Award).

As a student, Obradovich was a Gramercy Fellow in the UC San Diego Division of Social Sciences.

The authors also acknowledge the assistance of the San Diego Supercomputer Center and the Human Nature Group at UC San Diego.



Archives:

Click here to view our environmental news archive
The Utica Rome Green Expo™ charitable event presented by: Energy Users Consulting Services
© all rights reserved