Newly identified continent Zealandia faces a battle for recognition

Lurking beneath New Zealand is a long-hidden continent called Zealandia, geologists say. But since nobody is in charge of officially designating a new continent, individual scientists will ultimately have to judge for themselves.

A team of geologists pitches the scientific case for the new continent in the March/April issue of GSA Today, arguing that Zealandia is a continuous expanse of continental crust covering around 4.9 million square kilometers. That’s about the size of the Indian subcontinent. Unlike the other, mostly dry continents, Zealandia hides around 94 percent of its area beneath the ocean. Only New Zealand, New Caledonia and a few small islands peek above the waves.

“If we could pull the plug on the world’s oceans, it would be quite clear that Zealandia stands out about 3,000 meters above the surrounding ocean crust,” says study coauthor Nick Mortimer, a geologist at GNS Science in Dunedin, New Zealand. “If it wasn’t for the ocean level, long ago we’d have recognized Zealandia for what it was — a continent.”

The landmass faces an uphill battle for continent status, though. Unlike planets and slices of geologic time (SN: 10/15/16, p. 14), no international panel exists to officially rubber-stamp a new continent. The current number of continents is already vague — usually given as six or seven, with geologists referring to Europe and Asia collectively as Eurasia. Proponents will just have to start using the term “Zealandia” and hope it catches on, Mortimer says.

This odd path forward stems from the simple fact that nobody expected another addition to the continental ranks, says Keith Klepeis, a structural geologist at the University of Vermont in Burlington who supports Zealandia’s inclusion. The discovery illustrates that “the large and obvious can be overlooked in science,” he says.

Mortimer and others have been building a case for Zealandia for more than a decade and say they’ve now ticked off the boxes required to meet common definitions of a continent. The region is composed of continental rocks such as granite, for instance, unlike the denser volcanic basalt that forms ocean crust. Zealandia is also spatially distinct from nearby Australia thanks to an intervening stretch of ocean crust.

“If Zealandia was physically attached to Australia, then the big news story here wouldn’t be that there’s a new continent on planet Earth; it’d be that the Australian continent is 4.9 million square kilometers larger,” Mortimer says. Other geologic features rising from the seafloor either are not made of continental crust, such as volcano-built submarine plateaus, or are not distinct from nearby continents, such as Greenland.

Size is a sticking point, though. No minimum size requirement exists for continents. Mortimer and colleagues propose a 1-million-square-kilometer cutoff point. If this limit is accepted, Zealandia would be the scrawniest continent by far, little more than three-fifths the size of Australia. (Both submerged and dry areas contribute to a continent’s overall size.)

Scientists dub smaller fragments of continental crust microcontinents; microcontinents attached to larger continents are subcontinents. At about six times the size of Madagascar, one of the larger microcontinents, Zealandia fits better as a continent than a microcontinent, Mortimer and colleagues conclude.

“Zealandia’s in this sort of gray zone,” says Richard Ernst, a geologist at Carleton University in Ottawa. He proposes that an intermediate term could help bridge the gap between microcontinent and full-blown continent: mini-continent. The definition would cover Zealandia as well as other not-quite-continents such as India before it plowed into Eurasia tens of millions of years ago. Such a solution would be similar to the route taken for Pluto, which was demoted from planet to the newly coined “dwarf planet” in 2006.

Scientists previously assumed that New Zealand and its neighbors were an assortment of islands, fragments of long-gone continents and other geologic odds and ends. Recognizing Zealandia as a coherent continent would help scientists piece together ancient supercontinents and study how geologic forces reshape landmasses over time, Mortimer says.

Zealandia probably began as part of the southeastern edge of the supercontinent Gondwana, making up about 5 percent of that supersized landmass, before it began peeling off around 100 million years ago. This breakup stretched, thinned and distorted Zealandia, which ultimately lowered the region below sea level.

Nudging people to make good choices can backfire

Nudges are a growth industry. Inspired by a popular line of psychological research and introduced in a best-selling book a decade ago, these inexpensive behavior changers are currently on a roll.

Policy makers throughout the world, guided by behavioral scientists, are devising ways to steer people toward decisions deemed to be in their best interests. These simple interventions don’t force, teach or openly encourage anyone to do anything. Instead, they nudge, exploiting for good — at least from the policy makers’ perspective — mental tendencies that can sometimes lead us astray.

But new research suggests that low-cost nudges aimed at helping the masses have drawbacks. Even simple interventions that work at first can lead to unintended complications, creating headaches for nudgers and nudgees alike.

Nudge proponents, an influential group of psychologists and economists known as behavioral economists, follow a philosophy they dub libertarian paternalism. This seemingly contradictory phrase refers to a paternalistic desire to promote certain decisions via tactics that preserve each person’s freedom of choice. Self-designated “choice architects” design nudges to protect us from inclinations that might not serve us well, such as overconfidence, limited attention, a focus on now rather than later, the tendency to be more motivated by losses than gains and intuitive flights of fancy.

University of Chicago economist Richard Thaler and law professor Cass Sunstein, now at Harvard University, triggered this policy movement with their 2008 book Nudge. Thaler and Sunstein argued that people think less like an economist’s vision of a coldly rational, self-advancing Homo economicus than like TV’s bumbling, doughnut-obsessed Homer Simpson.

Choice architects like to prod with e-mail messages, for example, reminding a charity’s past donors that it’s time to give or telling tardy taxpayers that most of their neighbors or business peers have paid on time. To nudge healthier eating, these architects redesign cafeterias so that fruits and vegetables are easier to reach than junk food.

A popular nudge tactic consists of automatically enrolling people in organ-donation programs and retirement savings plans while allowing them to opt out if they want. Until recently, default choices for such programs left people out unless they took steps to join up. For organ donation, the nudge makes a difference: Rates of participation typically exceed 90 percent of adults in countries with opt-out policies and often fall below 15 percent in opt-in countries, which require explicit consent.

Promising results of dozens of nudge initiatives appear in two government reports issued last September. One came from the White House, which released the second annual report of its Social and Behavioral Sciences Team. The other came from the United Kingdom’s Behavioural Insights Team. Created by the British government in 2010, the U.K. group is often referred to as the Nudge Unit.

In a September 20, 2016, Bloomberg View column, Sunstein said the new reports show that nudges work, but often increase by only a few percentage points the number of people who, say, receive government benefits or comply with tax laws. He called on choice architects to tackle bigger challenges, such as finding ways to nudge people out of poverty or into higher education.

Missing from Sunstein’s comments and from the government reports, however, was any mention of a growing conviction among some researchers that well-intentioned nudges can have negative as well as positive effects. Accepting automatic enrollment in a company’s savings plan, for example, can later lead to regret among people who change jobs frequently or who realize too late that a default savings rate was set too low for their retirement needs. E-mail reminders to donate to a charity may work at first, but annoy recipients into unsubscribing from the donor list.

“I don’t want to get rid of nudges, but we’ve been a bit too optimistic in applying them to public policy,” says behavioral economist Mette Trier Damgaard of Aarhus University in Denmark.

Nudges, like medications for physical ailments, require careful evaluation of intended and unintended effects before being approved, she says. Policy makers need to know when and with whom an intervention works well enough to justify its side effects.

Default downer
That warning rings especially true for what is considered a shining star in the nudge universe — automatic enrollment of employees in retirement savings plans. The plans, called defaults, take effect unless workers decline to participate.

No one disputes that defaults raise participation rates in retirement programs compared with traditional plans that require employees to sign up on their own. But the power of opt-out plans to kick-start saving for retirement stayed under the radar until it was reported in the November 2001 Quarterly Journal of Economics.

When the company in the 2001 study — a health and financial services firm with more than 10,000 employees — switched from voluntary to automatic enrollment in a retirement savings account, employee participation rose from about 37 percent to nearly 86 percent.

Similar findings over the next few years led to passage of the U.S. Pension Protection Act of 2006, which encouraged employers to adopt automatic pension enrollment plans with increasing savings contributions over time.

But little is known about whether automatic enrollees are better or worse off as time passes and their personal situations change, says Harvard behavioral economist Brigitte Madrian. She coauthored the 2001 paper on the power of default savings plans.

Although automatic plans increase savings for those who otherwise would have squirreled away little or nothing, others may lose money because they would have contributed more to a self-directed retirement account, Madrian says. In some cases, having an automatic savings account may encourage irresponsible spending or early withdrawals of retirement money (with penalties) to cover debts. Such possibilities are plausible but have gone unstudied.

In line with Madrian’s concerns, mathematical models developed by finance professor Bruce Carlin of the University of California, Los Angeles and colleagues suggest that people who default into retirement plans learn less about money matters, and share less financial information with family and friends, than those who join plans that require active investment choices.

Opt-out savings programs “have been oversimplified to the public and are being sold as a great way to change behavior without addressing their complexities,” Madrian says. Research needs to address how well these plans mesh with individuals’ personalities and decision-making styles, she recommends.

Delay and regret
By comparing procrastinators with more decisive folks in one large retirement system, economist Jeffrey Brown examined how individual differences influence whether people join and stay happy with opt-out savings programs. Procrastinators were not only more likely to end up in a default plan but also more apt to regret that turn of events down the road, says Brown, of the University of Illinois at Urbana-Champaign.

Among state employees at the university who were offered any of three retirement plans, those who delayed making decisions were particularly likely to belong to a default plan and to want to switch to another plan, Brown and colleagues reported in September 2016 in the Journal of Financial Economics. These plans serve as a substitute for Social Security and often represent an employee’s largest financial asset. The default plan is generous toward those who stay long enough to retire from the state system but less so to those who leave early. A second plan allows for a larger cash refund upon leaving the system early. A third plan enables savers to direct contributions to any of a variety of investments. Being dumped into the default plan isn’t always the best option, especially because initial plan choices are permanent.

More than 6,000 employees who joined the retirement system in or after 1999 completed e-mail questionnaires in 2012. When asked what they would do if they could go back and redo their savings choice, 17 percent of defaulters reported a strong desire to change plans. Only about 7 percent of those who actively selected a plan and 8 percent of those who intentionally chose the default wanted to change.

The likelihood of having been assigned to the default plan and wanting to switch to another plan increased steadily as employees reported higher levels of procrastination. Implications of this finding are not entirely clear, Madrian says. Individuals who end up in the default savings plan, whether by choice or by procrastination, may, for instance, regret lots of events in their lives. If so, they can’t easily be compared with less regretful folks who chose another plan.

Requiring people to make an active choice of a retirement plan, even if they’re procrastinators, might reduce regret down the road, Madrian suspects. But given a complex, high-stakes choice — such as that faced by Illinois university employees — “it may still make sense to set a default option even if some individuals who end up in the default will regret it later.”

Researchers need to determine how defaults and other nudges instigate behavior changes before unleashing them on the public, says philosopher of science Till Grüne-Yanoff of the Royal Institute of Technology in Stockholm.

Hidden costs
Sometimes well-intentioned, up-front attempts to get people to do what seems right come back to bite nudgers on the bottom line.

Consider e-mail prompts and reminders. Although nudges were originally conceived to encourage people to accept an option unthinkingly, simple attempts to curb forgetfulness and explain procedures now get folded into the nudge repertoire. Short-term success stories abound for these inexpensive messages. The 2016 report of the U.S. Social and Behavioral Sciences Team cites a case in which e-mails sent by the Department of Education to student-loan recipients, which described how to apply for a federal repayment plan, led 6,000 additional borrowers to sign up for the plan in the following three months, relative to borrowers who did not receive the explanatory e-mail. Messages were tailored to borrowers’ circumstances, such as whether they previously expressed interest in the payback plan or had stopped making loan repayments.

The U.K. Behavioural Insights Team — now a global company with offices in Britain, North America, Australia and Singapore — also sees value in short, informational nudges.

One of the company’s projects produced an unexpected twist. Low-income New Orleans residents who hadn’t seen a primary care physician in more than two years — 21,442 of them — received one of three text messages inviting them to set up a free medical appointment. Telling people that they had been selected for a free appointment worked best, leading 1.4 percent of recipients to sign up, versus 1 percent of those who got an information-only text. But a text asking people to “take care of yourself so you can take care of the ones you love” backfired, resulting in only 0.7 percent of recipients making appointments. Uptake for all three groups was low, but the study suggested that nudges that unwittingly trigger bad feelings (guilt or shame) can easily go awry, Aarhus University’s Damgaard says.

A case in point is a study submitted for publication by Damgaard and behavioral economist Christina Gravert of the University of Gothenburg in Sweden. E-mailed donation reminders sent to people who had contributed to a Danish anti-poverty charity increased the number of donations in the short term, but also triggered an upturn in the number of people unsubscribing from the list.

People’s annoyance at receiving reminders perceived as too frequent or pushy cost the charity money over the long haul, Damgaard holds. Losses of list subscribers more than offset the financial gains from the temporary uptick in donations, she and Gravert conclude.

“Researchers have tended to overlook the hidden costs of nudging,” Damgaard says.

In one experiment, more than 17,000 previous donors to a Danish charity received an e-mail asking them to donate within 10 days. About half received an additional reminder one week later. Reminders yielded 46 donations, versus 30 donations from people sent only one e-mail. But over the next month, 318 reminded donors unsubscribed from the e-mail list, as opposed to 186 of those who received one e-mail. To Damgaard and Gravert, reminders were money losers — especially if sent more than once.

A second experiment examined more than 43,000 Danish charity donors split into three groups. The number of unsubscribers reached 71 among those sent an e-mail informing them that digital reminders would be sent every month. Among those receiving the same e-mail plus an announcement that only one reminder would be sent in the next three months, 44 people abandoned the mailing list. That’s what a digital sigh of relief looks like. An e-mail that combined a notice of monthly reminders with a promise of a donation from an anonymous sponsor for every mailing list donation slightly lowered annoyance at the prospect of monthly reminders — 52 unsubscribed.

The limits of nudge
There are at least two ways to think about unintended drawbacks to nudges. Behavioral economists including Damgaard take an optimistic stance. They see value in determining how nudges work over the long haul, for better and worse. In that way, researchers can target people most likely to benefit from specific nudges. Few schemes to change behavior, including nudges, alter people’s lives for the better in a big and lasting way, cautions Harvard behavioral economist and nudge proponent Todd Rogers. “One of the most important questions in behavioral science right now is, how do we induce persistent behavior change?”

But those already critical of libertarian paternalism say that new findings back up their pessimistic view of what nudges can do. When past charity donors in Denmark fork over more money in response to an e-mail reminder and then bolt from the mailing list, as reported by Damgaard and Gravert, they’re demonstrating that even a small-scale nudge can trigger resistance, says political scientist Frank Mols of the University of Queensland in Brisbane, Australia. “It verges on ridiculous to claim that nudges can change attitudes or behavior related to huge social problems, such as crime or climate change,” he says.

Nudges wrongly assume that each person makes decisions in isolation, Mols contends. People belong to various groups that frame the way they make sense of the world, he says. Lasting behavior change entails not nudging but persuasion techniques long exploited by advertisers: altering how people view their social identities. Coors beer, for instance, has long been marketed to small-town folks and city dwellers alike as the choice of rugged, outdoorsy individualists.

In the noncommercial realm, Mols points to a successful 2006 campaign to reduce water use in Queensland during a severe drought. Average per capita water use dropped substantially and stayed lower after the drought broke in 2009. That’s because the campaign included advertisements targeting citizens’ view of themselves as “Queenslanders,” he says. A good Queenslander became redefined as a “water-wise” person who consumed as little of the resource as possible.

Queensland’s persuasive approach to water conservation avoided ethical concerns that dog nudges, Mols adds. Choice architects’ conviction that people possess biased minds in need of expert guidance to achieve good lives cuts off debate about what constitutes a good life, he argues.

Elspeth Kirkman, a policy implementation specialist who heads the U.K. Behavioural Insights Team’s North American office in New York City, sees no ethical problem with nudges that people can reject anytime they want. But she acknowledges that ethical gray areas exist. “It’s not always clear when an intervention is a nudge and when it’s coercive manipulation,” she says. Nudge carefully and monitor an intervention’s intended and unintended effects for as long as possible, Kirkman advises.

Even amid calls for caution, nudges are expanding their reach. With input from the Behavioural Insights Team, a U.K. law passed in March 2016 and slated to take effect in April 2018 imposes a soft drink tax that rises with increasing sugar content. The law aims to encourage soft drink companies to switch from high-sugar products to artificially sweetened and low-sugar beverages in an effort to reduce obesity. The U.K. soft drink firm Lucozade Ribena Suntory and the retail company Tesco announced last November plans to cut sugar in soft drinks by at least 50 percent to escape the looming tax.

The law might prod consumers to change too, if companies stand their ground but raise prices of high-sugar drinks due to the new tax, Kirkman predicts.

In nudges as in life, though, the best-laid plans can tank. Perhaps scientists will discover serious health risks in artificial sweeteners currently considered safe, reviving soda makers’ sugar dependence. Maybe a black market for old-school soda will pop up in Britain, sending soft drink lovers to back-alley Coke dealers for sugar fixes.

The law of unintended consequences is always taxing.

Detachable scales turn this gecko into an escape artist

Large, detachable scales make a newly discovered species of gecko a tough catch. When a predator grabs hold, Madagascar’s Geckolepis megalepis strips down and slips away, looking more like slimy pink Silly Putty than a rugged lizard.

All species of Geckolepis geckos have tear-off scales that regrow within a few weeks, but G. megalepis boasts the largest. Some of its scales reach nearly 6 millimeters long. Mark Scherz, a herpetologist and taxonomist at Ludwig Maximilian University of Munich, and colleagues describe the new species February 7 in PeerJ.

The hardness and density of the oversized scales may help the gecko to escape being dinner, Scherz says. Attacking animals probably get their claws or teeth stuck on the scales while G. megalepis contracts its muscles, loosening the connection between the scales and the translucent tissue underneath. The predator is left with a mouthful of armor, but no meat. “It’s almost ridiculous,” Scherz says, “how easy it is for these geckos to lose their scales.”

In 1967, LSD was briefly labeled a breaker of chromosomes

Two New York researchers have found the hallucinogenic drug will markedly increase the rate of abnormal change in chromosomes. [Scientists] tested LSD on cell cultures from the blood of two healthy individuals … [and] also found similar abnormal changes in the blood of a schizophrenic patient who had been treated with [LSD]. The cell cultures showed a two-fold increase in chromosomal breaks over the normal rate. — Science News, April 1, 1967

Update
Psychedelic-era reports that LSD damages chromosomes got lots of press but fell apart within a few years. A review in Science in 1971 concluded that ingesting moderate doses of LSD caused no detectable genetic damage. Researchers are still trying to figure out the molecular workings of the drug. Recent evidence suggests that the substance gets trapped in a pocket of the receptor for serotonin, a key chemical messenger in the brain. Its prolonged stay may explain why LSD trips can last a day or more (SN: 3/4/17, p. 16).

Neandertals had an eye for patterns

Neandertals knew how to kick it up a couple of notches. Between 38,000 and 43,000 years ago, these close evolutionary relatives of humans added two notches to five previous incisions on a raven bone to produce an evenly spaced sequence, researchers say.

This visually consistent pattern suggests Neandertals either had an eye for pleasing-looking displays or saw some deeper symbolic meaning in the notch sequence, archaeologist Ana Majkić of the University of Bordeaux, France, and her colleagues report March 29 in PLOS ONE.

Notches added to the bone, unearthed in 2005 at a Crimean rock shelter that previously yielded Neandertal bones, were shallower and more quickly dashed off than the original five notches. But additions were carefully placed, resulting in relatively equal spacing of all notches.

Although bone notches may have had a practical use, such as fixing thread on an eyeless needle, the even spacing suggests Neandertals had a deeper meaning in mind — or at least knew what looked good.

Previous discoveries suggest Neandertals made eagle-claw necklaces and other personal ornaments, possibly for use in rituals (SN: 4/18/15, p. 7).

Event Horizon Telescope to try to capture images of elusive black hole edge

The Milky Way’s black hole may finally get its close-up.

Beginning on April 5, scientists with the Event Horizon Telescope will attempt to zoom in on a never-before-imaged realm: a black hole’s event horizon. That’s the boundary at which gravity’s pull becomes so strong that nothing can escape.

In the telescope’s cross hairs are two supermassive black holes, one at the center of the Milky Way, the other in the nearby galaxy M87. Scientists hope to capture the light emitted by a halo of gas that swirls just outside the event horizon as the black hole swallows it up.

The Event Horizon Telescope is not one telescope, but eight radio observatories linked together into a massive network that spans the globe. The new observations will be the first that include the ultrasensitive Atacama Large Millimeter/submillimeter Array in Chile’s Atacama Desert, increasing the possibility that the image will reveal new details. Astronomers will take data for five nights within a 10-day period.
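
A rough diffraction-limit estimate shows why a globe-spanning network matters; the figures below are standard ballpark values, not numbers from the observing plan. Treating the linked array as a single dish whose effective diameter $D$ approaches Earth’s (roughly $1.3 \times 10^{7}$ meters), and taking the observing wavelength $\lambda$ to be about 1.3 millimeters, the best achievable angular resolution is

$$\theta \approx \frac{\lambda}{D} \approx \frac{1.3 \times 10^{-3}\ \text{m}}{1.3 \times 10^{7}\ \text{m}} = 10^{-10}\ \text{radians} \approx 20\ \text{microarcseconds},$$

comparable to the tens of microarcseconds that the event horizon shadows of these two black holes are predicted to subtend.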

This is no Polaroid picture, though — it will be months before the data have been crunched and the portrait is ready for prime time.

Einstein’s latest anniversary marks the birth of modern cosmology

First of two parts

Sometimes it seems like every year offers an occasion to celebrate some sort of Einstein anniversary.

In 2015, everybody lauded the 100th anniversary of his general theory of relativity. Last year, scientists celebrated the centennial of his prediction of gravitational waves — by reporting the discovery of gravitational waves. And this year marks the centennial of Einstein’s paper establishing the birth of modern cosmology.

Before Einstein, cosmology was not very modern at all. Most scientists shunned it. It was regarded as a matter for philosophers or possibly theologians. You could do cosmology without even knowing any math.

But Einstein showed how the math of general relativity could be applied to the task of describing the cosmos. His theory offered a way to study cosmology precisely, with a firm physical and mathematical basis. Einstein provided the recipe for transforming cosmology from speculation to a field of scientific study.

“There is little doubt that Einstein’s 1917 paper … set the foundations of modern theoretical cosmology,” Irish physicist Cormac O’Raifeartaigh and colleagues write in a new analysis of that paper.

Einstein had pondered the implications of his new theory for cosmology even before he had finished it. General relativity was, after all, a theory of space and time — all of it. Einstein showed that gravity — the driving force sculpting the cosmic architecture — was simply the distortion of spacetime geometry generated by the presence of mass and energy. (He constructed an equation to show how spacetime geometry, on the left side of the equation, was determined by the density of mass-energy, the right side.) Since spacetime and mass-energy account for basically everything, the entire cosmos ought to behave as general relativity’s equation required.
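
In modern notation (a standard textbook form rather than a transcription of Einstein’s own 1917 typography), that equation reads

$$R_{\mu\nu} - \tfrac{1}{2}R\,g_{\mu\nu} = \frac{8\pi G}{c^{4}}\,T_{\mu\nu},$$

where the left-hand side, built from the spacetime metric $g_{\mu\nu}$, encodes geometry, and the right-hand side, the stress-energy tensor $T_{\mu\nu}$, encodes the density of mass and energy.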

Newton’s law of gravity had posed problems in that regard. If every mass attracted every other mass, as Newton had proclaimed, then all the matter in the universe ought to have just collapsed itself into one big blob. Newton suggested that the universe was infinite, filled with matter, so that attraction inward was balanced by the attraction of matter farther out. Nobody really bought that explanation, though. For one thing, it required a really precise arrangement: One star out of place, and the balance of attractions disappears and the universe collapses. It also required an infinity of stars, making it impossible to explain why it’s dark at night. (There would be a star out there along every line of sight at all times.)
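
For concreteness, the Newtonian law at issue says that two masses $m_1$ and $m_2$ separated by a distance $r$ attract each other with a force

$$F = \frac{G\,m_1 m_2}{r^{2}},$$

a pull with no repulsive counterpart, which is why the balance Newton imagined among infinitely many stars would be so delicate.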

Einstein hoped his theory of gravity would resolve the cosmic paradoxes of Newtonian gravity. So in early 1917, less than a year after his complete paper on the general theory was published, he delivered a short paper to the Prussian Academy of Sciences outlining the implications of his theory for cosmology.
In that paper, titled “Cosmological Considerations in the General Theory of Relativity,” he started by noting the problems posed by using Newton’s gravity to describe the universe. Einstein showed that Newton’s gravity would require a finite island of stars sitting in an infinite space. But over time such a collection of stars would evaporate. That problem could be avoided, though, if the universe turned out not to be infinite. Instead, Einstein said, everything would be fine if the universe is finite. Big, sure, but curved in such a way that it closed on itself, like a sphere.

Einstein’s mathematical challenge was to show that such a finite cosmic spacetime would be static and stable. (In those days nobody knew that the universe was expanding.) He assumed that on a large enough scale, the distribution of matter in this universe could be considered uniform. (Einstein said it was like viewing the Earth as a smooth sphere for most purposes, even though its terrain is full of complexities on smaller distance scales.) Matter’s effect on spacetime curvature would therefore be pretty much constant, and the universe’s overall condition would be unchanging.

All this made sense to Einstein because he had a limited view of what was actually going on in the cosmos. Like many scientists in those days, he believed the universe was basically just the Milky Way galaxy. All the known stars moved fairly slowly, consistent with his belief in a spherical cosmos with uniformly distributed mass. Unfortunately, general relativity’s math didn’t work if that was the case — it suggested the universe would not be stable. Einstein realized, though, that his view of the static spherical universe would succeed if he added a term to his original equation.

In fact, there were good reasons to include the term anyway. O’Raifeartaigh and colleagues point out that in his earlier work on general relativity, Einstein remarked in a footnote that his equation technically permitted the inclusion of an additional term. That didn’t seem to matter at the time. But in his cosmology paper, Einstein found that it was just the thing his equation needed to describe the universe properly (as Einstein then supposed the universe to be). So he added that factor, designated by the Greek letter lambda, to the left-hand side of his basic general relativity equation.
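
In the same modern notation as above, the amended equation becomes

$$R_{\mu\nu} - \tfrac{1}{2}R\,g_{\mu\nu} + \Lambda\,g_{\mu\nu} = \frac{8\pi G}{c^{4}}\,T_{\mu\nu},$$

with lambda ($\Lambda$) joining the geometry terms on the left. (Sign and placement conventions vary from author to author; Einstein’s 1917 paper arranged the terms somewhat differently.)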

“That term is necessary only for the purpose of making possible a quasi-static distribution of matter, as required by the fact of the small velocities of the stars,” Einstein wrote in his 1917 paper. As long as the magnitude of this new term on the geometry side of the equation was small enough, it would not alter the theory’s predictions for planetary motions in the solar system.

Einstein’s 1917 paper demonstrated the mathematical effectiveness of lambda (also called the “cosmological constant”) but did not say much about its physical interpretation. In another paper, published in 1918, he commented that lambda represented a negative mass density — it played “the role of gravitating negative masses which are distributed all over the interstellar space.” Negative mass would counter the attractive gravity and prevent all the matter in Einstein’s spherical finite universe from collapsing.

As everybody now knows, though, there is no danger of collapse, because the universe is not static to begin with, but rather is rapidly expanding. After Edwin Hubble had established such expansion, Einstein abandoned lambda as unnecessary (or at least, set it equal to zero in his equation). Others built on Einstein’s foundation to derive the math needed to make sense of Hubble’s discovery, eventually leading to the modern view of an expanding universe initiated by a Big Bang explosion.

But in the 1990s, astronomers discovered that the universe is not only expanding, it is expanding at an accelerating rate. Such acceleration requires a mysterious driving force, nicknamed “dark energy,” exerting negative pressure in space. Many experts believe Einstein’s cosmological constant, now interpreted as a constant amount of energy with negative pressure infusing all of space, is the dark energy’s true identity.

Einstein might not have been surprised by all of this. He realized that only time would tell whether his lambda would vanish to zero or play a role in the motions of the heavens. As he wrote in 1917 to the Dutch physicist-astronomer Willem de Sitter: “One day, our actual knowledge of the composition of the fixed-star sky, the apparent motions of fixed stars, and the position of spectral lines as a function of distance, will probably have come far enough for us to be able to decide empirically the question of whether or not lambda vanishes.”

Hawk moths convert nectar into antioxidants

Hawk moths have a sweet solution to muscle damage.

Manduca sexta moths dine solely on nectar, but the sugary liquid does more than fuel their bodies. The insects convert some of the sugars into antioxidants that protect the moths’ hardworking muscles, researchers report in the Feb. 17 Science.

When animals expend a lot of energy, as hawk moths do while rapidly beating their wings to hover at a flower, their bodies produce reactive molecules, which attack muscle and other cells. Humans and other animals eat foods that contain antioxidants that neutralize the harmful molecules. But the moths’ sole food source — nectar — supplies almost no antioxidants.

So the insects make their own. They send some of the nectar sugars through an alternative metabolic pathway to make antioxidants instead of energy, says study coauthor Eran Levin, an entomologist now at Tel Aviv University. Levin and colleagues say this mechanism may have allowed nectar-loving animals to evolve into powerful, energy-intensive fliers.

Immune cells play surprising role in steady heartbeat

Immune system cells may help your heart keep the beat. These cells, called macrophages, usually protect the body from invading pathogens. But a new study published April 20 in Cell shows that in mice, the immune cells help electricity flow between muscle cells to keep the organ pumping.

Macrophages squeeze in between heart muscle cells, called cardiomyocytes. These muscle cells rhythmically contract in response to electrical signals, pumping blood through the heart. By “plugging in” to the cardiomyocytes, macrophages help the heart cells receive the signals and stay on beat.

Researchers have known for a couple of years that macrophages live in healthy heart tissue. But their specific functions “were still very much a mystery,” says Edward Thorp, an immunologist at Northwestern University’s Feinberg School of Medicine in Chicago. He calls the study’s conclusion that macrophages electrically couple with cardiomyocytes “paradigm shifting.” It highlights “the functional diversity and physiologic importance of macrophages, beyond their role in host defense,” Thorp says.

Matthias Nahrendorf, a cell biologist at Harvard Medical School, stumbled onto this electrifying find by accident.

Curious about how macrophages affect the heart, he tried to perform a cardiac MRI on a mouse genetically engineered to lack the immune cells. But the rodent’s heartbeat was too slow and irregular for the scan to work.

These symptoms pointed to a problem in the mouse’s atrioventricular node, a bundle of muscle fibers that electrically connects the upper and lower chambers of the heart. Humans with AV node irregularities may need a pacemaker to keep their heart beating in time. In healthy mice, researchers discovered macrophages concentrated in the AV node, but what the cells were doing there was unknown.

Isolating a heart macrophage and testing it for electrical activity didn’t solve the mystery. But when the researchers coupled a macrophage with a cardiomyocyte, the two cells began communicating electrically. That’s important, because the heart muscle cells contract thanks to electrical signals.

Cardiomyocytes maintain an imbalance of ions. In the resting state, there are more positive ions outside the cell than inside; when a cardiomyocyte receives an electrical signal from a neighboring heart cell, that distribution switches. This momentary change causes the cell to contract and send the signal on to the next cardiomyocyte.

Scientists previously thought that cardiomyocytes were capable of this electrical shift, called depolarization, on their own. But Nahrendorf and his team found that macrophages aid in the process. Using a protein, a macrophage hooks up to a cardiomyocyte. This protein directly connects the insides of the two cells, allowing the macrophage to transfer positive charge and give the cardiomyocyte a boost, rather like a jumper cable. This makes it easier for the heart cells to depolarize and trigger the heart contraction, Nahrendorf says.

“With the help of the macrophages, the conduction system becomes more reliable, and it is able to conduct faster,” he says.

Nahrendorf and colleagues found macrophages within the AV node in human hearts as well but don’t know if the cells play the same role in people. The next step is to confirm that role and explore whether or not the immune cells could be behind heart problems like arrhythmia, says Nahrendorf.

Long naps lead to less night sleep for toddlers

Like most moms and dads, I have only a hazy memory of my time in the post-baby throes of sleep deprivation. But I do remember feeling instant rage upon hearing a popular piece of advice for how to get my little one some shut-eye: “sleep begets sleep.” The rule’s reasoning is unassailable: To get some sleep, my baby just had to get some sleep. Oh. So helpful. Thank you, lady in the post office and entire Internet.

So I admit to feeling some satisfaction when I came across a study that found an exception to the “sleep begets sleep” rule. The study quite reasonably suggests there is a finite amount of sleep to be had, at least for the 50 Japanese 19-month-olds tracked by researchers.

The researchers used activity monitors to record a week’s worth of babies’ daytime naps, nighttime sleep and activity patterns. The results, published June 9, 2016, in Scientific Reports, showed a trade-off between naps and night sleep: The longer the nap, the shorter the night sleep, the researchers found. And naps that stretched late into the afternoon seemed to push back bedtime.

In this study, naps didn’t affect the total amount of sleep each child got. Instead, the distribution of sleep across day and night changed. That means you probably can’t tinker with your toddler’s nap schedule without also tinkering with her nighttime sleep. In a way, that’s reassuring: It makes it harder to screw up the nap in a way that leads to a sleep-deprived child. If daytime sleep is lacking, your child will probably make up for it at night.

A sleeping child looks blissfully relaxed, but beneath that quiet exterior, the body is doing some incredible work. New concepts and vocabulary get stitched into the brain. The immune system hones its ability to bust germs. And limbs literally stretch. Babies grew longer in the four days right after they slept more than normal, scientists reported in Sleep in 2011. Scientists don’t yet know if this important work happens selectively during naps or night sleep.

Right now, both my 4-year-old and 2-year-old take post-lunch naps (and on the absolute best of days, those naps occur in glorious tandem). Their siestas probably push their bedtimes back a bit. But that’s OK with all of us. Long spring and summer days make it hard for my girls to go to sleep at 7:30 p.m. anyway. The times I’ve optimistically tried an early bedtime, my younger daughter insists I look out the window to see the obvious: “The sky is awake, Mommy.”