Oxygen atoms from Earth bombard the moon

Life on Earth may have made its mark on the moon billions of years before Neil Armstrong’s famous first step.

Observations by Japan’s moon-orbiting Kaguya spacecraft suggest that oxygen atoms from Earth’s upper atmosphere bombard the moon’s surface for a few days each month. This oxygen onslaught began in earnest around 2.4 billion years ago when photosynthetic microbes first flourished (SN Online: 9/8/15), planetary scientist Kentaro Terada of Osaka University in Japan and colleagues propose January 30 in Nature Astronomy.

The oxygen atoms begin their incredible journey in the upper atmosphere, where they are ionized by ultraviolet radiation, the researchers suggest. Electric fields or plasma waves accelerate the oxygen ions into the magnetic cocoon that envelops Earth. One side of that magnetosphere stretches away from the sun like a flag in the wind. For five days each lunar cycle, the moon passes through the magnetosphere and is barraged by earthly ions, including oxygen.

Based on Kaguya’s measurements of this space-traveling oxygen in 2008, Terada and colleagues estimate that at least 26,000 oxygen ions per second hit each square centimeter of the lunar surface during the five-day period. The uppermost lunar soil may, therefore, preserve bits of Earth’s ancient atmosphere, the researchers write, though distinguishing atoms that arrived from Earth from those delivered by the solar wind would be difficult.
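A rough back-of-envelope sketch shows the scale that flux implies over geologic time. The 29.5-day lunar cycle, the 2.4-billion-year span and the assumption of a constant flux are additions for illustration, not figures from the researchers:

```python
# Back-of-envelope estimate (illustrative only) of cumulative oxygen
# deposition on the lunar surface, using the article's flux figure.
# Assumed beyond the article: a 29.5-day lunar cycle, a 2.4-billion-year
# span, and a flux that stayed constant the whole time.

FLUX = 26_000             # ions per second per square centimeter (lower bound)
SECONDS_PER_DAY = 86_400
DAYS_EXPOSED_PER_CYCLE = 5
CYCLE_DAYS = 29.5         # length of one synodic month, assumed
YEARS = 2.4e9             # since atmospheric oxygen rose, per the article
DAYS_PER_YEAR = 365.25

total_days = YEARS * DAYS_PER_YEAR
exposed_days = total_days * (DAYS_EXPOSED_PER_CYCLE / CYCLE_DAYS)
ions_per_cm2 = FLUX * exposed_days * SECONDS_PER_DAY

print(f"{ions_per_cm2:.2e} ions per square centimeter")
# → 3.34e+20 ions per square centimeter
```

Even as a lower bound, that is hundreds of quintillions of ions per square centimeter, which is why the researchers suggest traces of Earth’s ancient atmosphere could survive in the topmost lunar soil.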

A diet of corn turns wild hamsters into cannibals

The first sign that something was wrong was that the female hamsters were really active in their cages. These were European hamsters, a species that is endangered in France and thought to be on the decline in the rest of its Eurasian range. But in a lab at the University of Strasbourg in France, the hamsters were oddly aggressive, and they didn’t give birth in their nests.

Mathilde Tissier, a conservation biologist at the University of Strasbourg, remembers seeing the newly born pups alone, spread around in the cages, while their mothers ran about. Then, the mother hamsters would take their pups and put them in the piles of corn they had stored in the cage, Tissier says, and eat their babies alive.

“I had some really bad moments,” she says. “I thought I had done something wrong.”

Tissier and her colleagues had been looking into the effect of wheat- and corn-based diets in European hamsters because the rodent’s population in France was quickly disappearing. It now numbers only about 1,000 animals, most of which live in farm fields. The hamsters, being burrowers, are important for the local ecosystem and can promote soil health. But more than that, they’re an umbrella species, Tissier notes. Protect them, and their habitat, and there will be benefits for the many other farmland species that are declining.

A typical corn field is some seven times larger than the home range for a female hamster, so the animals that live in these agricultural areas eat mostly corn — or whatever other crop is growing in that field. But not all crops provide the same level of nutrition, and Tissier and her colleagues were curious about how that might affect the hamsters. Perhaps there would be differences in litter size or pup growth, they surmised. So they began an experiment, feeding hamsters wheat or corn in the lab, with either clover or earthworms to better reflect the animals’ normal, omnivorous diets.

“We thought [the diets] would create some [nutritional] deficiencies,” Tissier says. But instead, Tissier and her colleagues saw something very different. All the female hamsters were able to successfully reproduce, but those fed corn showed abnormal behaviors before giving birth. They then gave birth outside their nests and most ate their young on the first day after birth. Only one female weaned her pups, though that didn’t have a happy ending either — the two brothers ate their female siblings, Tissier and her colleagues report January 18 in the Proceedings of the Royal Society B.

Tissier spent a year trying to figure out what was going on. Hamsters and other rodents will eat their young, but usually only when a baby has died and the mother hamster wants to keep her nest clean. They don’t normally eat healthy babies alive. The researchers reared more hamsters in the lab, this time supplementing their corn and earthworm diet with a solution of niacin. With the supplement, the hamsters raised their young normally, and not as a snack.

Unlike wheat, corn lacks a number of micronutrients, including niacin. In people who subsist on a diet of mostly corn, that niacin deficiency can result in a disease called pellagra. The disease emerged in the 1700s in Europe after corn became a dietary staple. People with pellagra experienced horrible rashes, diarrhea and dementia. Until the disease’s cause was identified in the mid-20th century, millions of people suffered and thousands died. (The Mesoamericans who domesticated corn largely did not have this problem because they processed corn with a technique called nixtamalization, which frees bound niacin in corn and makes it available as a nutrient. The Europeans who brought corn back to their home countries didn’t bring back this process.)

The European hamsters fed corn-based diets exhibited symptoms similar to pellagra, and this is probably happening in the wild, Tissier says. She notes that officials with the French National Office for Hunting and Wildlife have seen hamsters in the wild subsisting on mostly corn and eating their pups.

Tissier and her colleagues are now working to find ways to improve diversity in agricultural systems, so that hamsters — and other creatures — can eat a more well-balanced diet. “The idea is not only to protect the hamster,” she says, “but to protect the entire biodiversity and to restore good ecosystems, even in farmland.”

Speech recognition has come a long way in 50 years

Computers that hear

Computer engineers have dreamed of a machine that would translate speech into something that a vacuum tube or transistor could understand. Now at last, some promising hardware is being developed…. It is still a long way from the kind of science fiction computer that can understand sentences or long speeches. — Science News, March 4, 1967

Update
That 1967 device knew the words one through nine. Earlier speech recognition devices sliced a word into segments and analyzed them for absolute loudness. But this machine, developed by Genung L. Clapper at IBM, identified the volume of a pitch segment compared with its neighbors to account for the variability of human speech. Today’s speech recognition goes much further, dividing words into distinct units of sound and syntax. The software decodes speech by applying pattern recognition and a statistical method called the hidden Markov model to the sounds. We rely on speech recognition to open an app to order groceries or to send a text to ask someone at home if we need more milk. Hello, Siri.
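The hidden Markov model decoding mentioned above can be illustrated with a toy example. Everything in this sketch (the two sound classes, the loud/quiet observations and all the probabilities) is invented for the demonstration; real recognizers use far richer models trained on large amounts of speech:

```python
# Toy hidden Markov model decoder, using the Viterbi algorithm to find
# the most likely sequence of hidden sound classes behind a series of
# observed audio frames. All numbers here are made up for illustration.

states = ["vowel", "consonant"]
start_p = {"vowel": 0.5, "consonant": 0.5}
trans_p = {  # probability of moving from one sound class to another
    "vowel": {"vowel": 0.6, "consonant": 0.4},
    "consonant": {"vowel": 0.7, "consonant": 0.3},
}
emit_p = {  # probability each class produces a loud or quiet frame
    "vowel": {"loud": 0.8, "quiet": 0.2},
    "consonant": {"loud": 0.3, "quiet": 0.7},
}

def viterbi(observations):
    """Return the most likely hidden state sequence for the observations."""
    # probs[s] = probability of the best path ending in state s
    probs = {s: start_p[s] * emit_p[s][observations[0]] for s in states}
    paths = {s: [s] for s in states}
    for obs in observations[1:]:
        new_probs, new_paths = {}, {}
        for s in states:
            # choose the best previous state leading into s
            prev = max(states, key=lambda p: probs[p] * trans_p[p][s])
            new_probs[s] = probs[prev] * trans_p[prev][s] * emit_p[s][obs]
            new_paths[s] = paths[prev] + [s]
        probs, paths = new_probs, new_paths
    best = max(states, key=lambda s: probs[s])
    return paths[best]

print(viterbi(["loud", "quiet", "loud"]))
# → ['vowel', 'consonant', 'vowel']
```

The decoder never observes the sound classes directly; it infers the likeliest hidden sequence from the audio evidence, which is the core idea behind HMM-based speech recognition.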

Nudging people to make good choices can backfire

Nudges are a growth industry. Inspired by a popular line of psychological research and introduced in a best-selling book a decade ago, these inexpensive behavior changers are currently on a roll.

Policy makers throughout the world, guided by behavioral scientists, are devising ways to steer people toward decisions deemed to be in their best interests. These simple interventions don’t force, teach or openly encourage anyone to do anything. Instead, they nudge, exploiting for good — at least from the policy makers’ perspective — mental tendencies that can sometimes lead us astray.

But new research suggests that low-cost nudges aimed at helping the masses have drawbacks. Even simple interventions that work at first can lead to unintended complications, creating headaches for nudgers and nudgees alike.

Nudge proponents, an influential group of psychologists and economists known as behavioral economists, follow a philosophy they dub libertarian paternalism. This seemingly contradictory phrase refers to a paternalistic desire to promote certain decisions via tactics that preserve each person’s freedom of choice. Self-designated “choice architects” design nudges to protect us from inclinations that might not serve us well, such as overconfidence, limited attention, a focus on now rather than later, the tendency to be more motivated by losses than gains and intuitive flights of fancy.

University of Chicago economist Richard Thaler and law professor Cass Sunstein, now at Harvard University, triggered this policy movement with their 2008 book Nudge. Thaler and Sunstein argued that people think less like an economist’s vision of a coldly rational, self-advancing Homo economicus than like TV’s bumbling, doughnut-obsessed Homer Simpson.

Choice architects like to prod with e-mail messages, for example, reminding a charity’s past donors that it’s time to give or telling tardy taxpayers that most of their neighbors or business peers have paid on time. To nudge healthier eating, these architects redesign cafeterias so that fruits and vegetables are easier to reach than junk food.

A popular nudge tactic consists of automatically enrolling people in organ-donation programs and retirement savings plans while allowing them to opt out if they want. Until recently, default choices for such programs left people out unless they took steps to join up. For organ donation, the nudge makes a difference: Rates of participation typically exceed 90 percent of adults in countries with opt-out policies and often fall below 15 percent in opt-in countries, which require explicit consent.

Promising results of dozens of nudge initiatives appear in two government reports issued last September. One came from the White House, which released the second annual report of its Social and Behavioral Sciences Team. The other came from the United Kingdom’s Behavioural Insights Team. Created by the British government in 2010, the U.K. group is often referred to as the Nudge Unit.

In a September 20, 2016, Bloomberg View column, Sunstein said the new reports show that nudges work, but often increase by only a few percentage points the number of people who, say, receive government benefits or comply with tax laws. He called on choice architects to tackle bigger challenges, such as finding ways to nudge people out of poverty or into higher education.

Missing from Sunstein’s comments and from the government reports, however, was any mention of a growing conviction among some researchers that well-intentioned nudges can have negative as well as positive effects. Accepting automatic enrollment in a company’s savings plan, for example, can later lead to regret among people who change jobs frequently or who realize too late that a default savings rate was set too low for their retirement needs. E-mail reminders to donate to a charity may work at first, but annoy recipients into unsubscribing from the donor list.

“I don’t want to get rid of nudges, but we’ve been a bit too optimistic in applying them to public policy,” says behavioral economist Mette Trier Damgaard of Aarhus University in Denmark.

Nudges, like medications for physical ailments, require careful evaluation of intended and unintended effects before being approved, she says. Policy makers need to know when and with whom an intervention works well enough to justify its side effects.

Default downer
That warning rings especially true for what is considered a shining star in the nudge universe — automatic enrollment of employees in retirement savings plans. The plans, called defaults, take effect unless workers decline to participate.

No one disputes that defaults raise participation rates in retirement programs compared with traditional plans that require employees to sign up on their own. But the power of opt-out plans to kick-start saving for retirement stayed under the radar until it was reported in the November 2001 Quarterly Journal of Economics.

When the company in the 2001 study — a health and financial services firm with more than 10,000 employees — switched from voluntary to automatic enrollment in a retirement savings account, employee participation rose from about 37 percent to nearly 86 percent.

Similar findings over the next few years led to passage of the U.S. Pension Protection Act of 2006, which encouraged employers to adopt automatic pension enrollment plans with increasing savings contributions over time.

But little is known about whether automatic enrollees are better or worse off as time passes and their personal situations change, says Harvard behavioral economist Brigitte Madrian. She coauthored the 2001 paper on the power of default savings plans.

Although automatic plans increase savings for those who otherwise would have squirreled away little or nothing, others may lose money because they would have contributed more to a self-directed retirement account, Madrian says. In some cases, having an automatic savings account may encourage irresponsible spending or early withdrawals of retirement money (with penalties) to cover debts. Such possibilities are plausible but have gone unstudied.

In line with Madrian’s concerns, mathematical models developed by finance professor Bruce Carlin of the University of California, Los Angeles and colleagues suggest that people who default into retirement plans learn less about money matters, and share less financial information with family and friends, than those who join plans that require active investment choices.

Opt-out savings programs “have been oversimplified to the public and are being sold as a great way to change behavior without addressing their complexities,” Madrian says. Research needs to address how well these plans mesh with individuals’ personalities and decision-making styles, she recommends.

Delay and regret
By comparing procrastinators with more decisive folks in one large retirement system, economist Jeffrey Brown examined how individual differences influence whether people join and stay happy with opt-out savings programs. Procrastinators were not only more likely to end up in a default plan but also more apt to regret that turn of events down the road, says Brown, of the University of Illinois at Urbana-Champaign.

Among state employees at the university who were offered any of three retirement plans, those who delayed making decisions were particularly likely to belong to a default plan and to want to switch to another plan, Brown and colleagues reported in September 2016 in the Journal of Financial Economics. These plans serve as a substitute for Social Security and often represent an employee’s largest financial asset. The default plan is generous toward those who stay long enough to retire from the state system but less so to those who leave early. A second plan allows for a larger cash refund upon leaving the system early. A third plan enables savers to direct contributions to any of a variety of investments. Being dumped into the default plan isn’t always the best option, especially because initial plan choices are permanent.

More than 6,000 employees who joined the retirement system in or after 1999 completed e-mail questionnaires in 2012. When asked what they would do if they could go back and redo their savings choice, 17 percent of defaulters reported a strong desire to change plans. Only about 7 percent of those who actively selected a plan and 8 percent of those who intentionally chose the default wanted to change.

The likelihood of having been assigned to the default plan and wanting to switch to another plan increased steadily as employees reported higher levels of procrastination. Implications of this finding are not entirely clear, Madrian says. Individuals in the default savings plan either by choice or procrastination may, for instance, regret lots of events in their lives. If so, they can’t easily be compared with less regretful folks who chose another plan.

Requiring people to make an active choice of a retirement plan, even if they’re procrastinators, might reduce regret down the road, Madrian suspects. But given a complex, high-stakes choice — such as that faced by Illinois university employees — “it may still make sense to set a default option even if some individuals who end up in the default will regret it later.”

Researchers need to determine how defaults and other nudges instigate behavior changes before unleashing them on the public, says philosopher of science Till Grüne-Yanoff of the Royal Institute of Technology in Stockholm.

Hidden costs
Sometimes well-intentioned, up-front attempts to get people to do what seems right come back to bite nudgers on the bottom line.

Consider e-mail prompts and reminders. Although nudges were originally conceived to encourage people to accept an option unthinkingly, simple attempts to curb forgetfulness and explain procedures now get folded into the nudge repertoire. Short-term success stories abound for these inexpensive messages. The 2016 report of the U.S. Social and Behavioral Sciences Team cites a case in which e-mails sent by the Department of Education to student-loan recipients, which described how to apply for a federal repayment plan, led 6,000 additional borrowers to sign up for the plan in the following three months, relative to borrowers who did not receive the explanatory e-mail. Messages were tailored to borrowers’ circumstances, such as whether they previously expressed interest in the payback plan or had stopped making loan repayments.

The U.K. Behavioural Insights Team — now a global company with offices in Britain, North America, Australia and Singapore — also sees value in short, informational nudges.

One of the company’s projects produced an unexpected twist. Low-income New Orleans residents who hadn’t seen a primary care physician in more than two years — 21,442 of them — received one of three text messages to set up a free medical appointment. Telling people that they had been selected for a free appointment worked best, leading 1.4 percent of recipients to sign up, versus 1 percent of those who got an information-only text. But a text asking people to “take care of yourself so you can take care of the ones you love” backfired, resulting in only 0.7 percent of recipients making appointments. Uptake for all three groups was low, but the study suggested that nudges that unwittingly trigger bad feelings (guilt or shame) can easily go awry, Aarhus University’s Damgaard says.

A case in point is a study submitted for publication by Damgaard and behavioral economist Christina Gravert of the University of Gothenburg in Sweden. E-mailed donation reminders sent to people who had contributed to a Danish anti-poverty charity increased the number of donations in the short term, but also triggered an upturn in the number of people unsubscribing from the list.

People’s annoyance at receiving reminders perceived as too frequent or pushy cost the charity money over the long haul, Damgaard holds. Losses of list subscribers more than offset the financial gains from the temporary uptick in donations, she and Gravert conclude.

“Researchers have tended to overlook the hidden costs of nudging,” Damgaard says.

In one experiment, more than 17,000 previous donors to a Danish charity received an e-mail asking them to donate within 10 days. About half received an additional reminder one week later. Reminders yielded 46 donations, versus 30 donations from people sent only one e-mail. But over the next month, 318 reminded donors unsubscribed from the e-mail list, as opposed to 186 of those who received one e-mail. To Damgaard and Gravert, reminders were money losers — especially if sent more than once.
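The trade-off Damgaard and Gravert describe can be made concrete with a toy calculation. The donation and unsubscribe counts come from the experiment above; the monetary values are hypothetical, chosen only to show how subscriber losses can outweigh donation gains:

```python
# Illustrative cost-benefit sketch of the reminder experiment described
# above. Counts are from the study; the monetary values are invented
# for the example, not figures from the researchers.

extra_donations = 46 - 30        # donations gained by sending the reminder
extra_unsubscribes = 318 - 186   # subscribers lost from the mailing list

AVG_DONATION = 50.0              # hypothetical value of one donation
SUBSCRIBER_VALUE = 10.0          # hypothetical future value of one subscriber

gain = extra_donations * AVG_DONATION
loss = extra_unsubscribes * SUBSCRIBER_VALUE
print(f"gain {gain:.0f}, loss {loss:.0f}, net {gain - loss:.0f}")
# → gain 800, loss 1320, net -520
```

Under these made-up values the extra reminder is a net loss, which mirrors the authors’ conclusion that unsubscribes more than offset the short-term bump in donations.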

A second experiment examined more than 43,000 Danish charity donors split into three groups. The number of unsubscribers reached 71 among those sent an e-mail informing them that digital reminders would be sent every month. Among those receiving the same e-mail plus an announcement that only one reminder would be sent in the next three months, 44 people abandoned the mailing list. That’s what a digital sigh of relief looks like. An e-mail that combined a notice of monthly reminders with a promise of a donation from an anonymous sponsor for every mailing list donation slightly lowered annoyance at the prospect of monthly reminders — 52 unsubscribed.

The limits of nudge
There are at least two ways to think about unintended drawbacks to nudges. Behavioral economists including Damgaard take an optimistic stance. They see value in determining how nudges work over the long haul, for better and worse. In that way, researchers can target people most likely to benefit from specific nudges. Few schemes to change behavior, including nudges, alter people’s lives for the better in a big and lasting way, cautions Harvard behavioral economist and nudge proponent Todd Rogers. “One of the most important questions in behavioral science right now is, how do we induce persistent behavior change?”

But those already critical of libertarian paternalism say that new findings back up their pessimistic view of what nudges can do. When past charity donors in Denmark fork over more money in response to an e-mail reminder and then bolt from the mailing list, as reported by Damgaard and Gravert, they’re demonstrating that even a small-scale nudge can trigger resistance, says political scientist Frank Mols of the University of Queensland in Brisbane, Australia. “It verges on ridiculous to claim that nudges can change attitudes or behavior related to huge social problems, such as crime or climate change,” he says.

Nudges wrongly assume that each person makes decisions in isolation, Mols contends. People belong to various groups that frame the way they make sense of the world, he says. Rather than nudging, lasting behavior change entails persuasion techniques long exploited by advertisers: altering how people view their social identities. Coors beer, for instance, has long been marketed to small-town folks and city dwellers alike as the choice of rugged, outdoorsy individualists.

In the noncommercial realm, Mols points to a successful 2006 campaign to reduce water use in Queensland during a severe drought. Average per capita water use dropped substantially and stayed lower after the drought broke in 2009. That’s because the campaign included advertisements targeting citizens’ view of themselves as “Queenslanders,” he says. A good Queenslander became redefined as a “water-wise” person who consumed as little of the resource as possible.

Queensland’s persuasive approach to water conservation avoided ethical concerns that dog nudges, Mols adds. Choice architects’ conviction that people possess biased minds in need of expert guidance to achieve good lives cuts off debate about what constitutes a good life, he argues.

Elspeth Kirkman, a policy implementation specialist who heads the U.K. Behavioural Insights Team’s North American office in New York City, sees no ethical problem with nudges that people can reject anytime they want. But she acknowledges that ethical gray areas exist. “It’s not always clear when an intervention is a nudge and when it’s coercive manipulation,” she says. Nudge carefully and monitor an intervention’s intended and unintended effects for as long as possible, Kirkman advises.

Even amid calls for caution, nudges are expanding their reach. With input from the Behavioural Insights Team, a U.K. law passed in March 2016 and slated to take effect in April 2018 imposes a soft drink tax that rises with increasing sugar content. The law aims to encourage soft drink companies to switch from high-sugar products to artificially sweetened and low-sugar beverages in an effort to reduce obesity. The U.K. soft drink firm Lucozade Ribena Suntory and the retail company Tesco announced last November plans to cut sugar in soft drinks by at least 50 percent to escape the looming tax.

The law might prod consumers to change too, if companies stand their ground but raise prices of high-sugar drinks due to the new tax, Kirkman predicts.

In nudges as in life, though, the best-laid plans can tank. Perhaps scientists will discover serious health risks in artificial sweeteners currently considered safe, reviving soda makers’ sugar dependence. Maybe a black market for old-school soda will pop up in Britain, sending soft drink lovers to back-alley Coke dealers for sugar fixes.

The law of unintended consequences is always taxing.

This article appears in the March 18, 2017, issue of Science News with the headline, “Nudge Backlash: Steering people’s decisions with simple tactics can come with a downside.”

Neandertals had an eye for patterns

Neandertals knew how to kick it up a couple of notches. Between 38,000 and 43,000 years ago, these close evolutionary relatives of humans added two notches to five previous incisions on a raven bone to produce an evenly spaced sequence, researchers say.

This visually consistent pattern suggests Neandertals either had an eye for pleasing-looking displays or saw some deeper symbolic meaning in the notch sequence, archaeologist Ana Majkić of the University of Bordeaux, France, and her colleagues report March 29 in PLOS ONE.

Notches added to the bone, unearthed in 2005 at a Crimean rock shelter that previously yielded Neandertal bones, were shallower and more quickly dashed off than the original five notches. But the additions were carefully placed, resulting in relatively equal spacing of all notches.

Although bone notches may have had a practical use, such as fixing thread on an eyeless needle, the even spacing suggests Neandertals had a deeper meaning in mind — or at least knew what looked good.

Previous discoveries suggest Neandertals made eagle-claw necklaces and other personal ornaments, possibly for use in rituals (SN: 4/18/15, p. 7).

Einstein’s latest anniversary marks the birth of modern cosmology

First of two parts

Sometimes it seems like every year offers an occasion to celebrate some sort of Einstein anniversary.

In 2015, everybody lauded the 100th anniversary of his general theory of relativity. Last year, scientists celebrated the centennial of his prediction of gravitational waves — by reporting the discovery of gravitational waves. And this year marks the centennial of Einstein’s paper establishing the birth of modern cosmology.

Before Einstein, cosmology was not very modern at all. Most scientists shunned it. It was regarded as a matter for philosophers or possibly theologians. You could do cosmology without even knowing any math.

But Einstein showed how the math of general relativity could be applied to the task of describing the cosmos. His theory offered a way to study cosmology precisely, with a firm physical and mathematical basis. Einstein provided the recipe for transforming cosmology from speculation to a field of scientific study.

“There is little doubt that Einstein’s 1917 paper … set the foundations of modern theoretical cosmology,” Irish physicist Cormac O’Raifeartaigh and colleagues write in a new analysis of that paper.

Einstein had pondered the implications of his new theory for cosmology even before he had finished it. General relativity was, after all, a theory of space and time — all of it. Einstein’s theory showed that gravity — the driving force sculpting the cosmic architecture — was simply the distortion of spacetime geometry generated by the presence of mass and energy. (He constructed an equation to show how spacetime geometry, on the left side of the equation, was determined by the density of mass-energy, the right side.) Since spacetime and mass-energy account for basically everything, the entire cosmos ought to behave as general relativity’s equation required.

Newton’s law of gravity had posed problems in that regard. If every mass attracted every other mass, as Newton had proclaimed, then all the matter in the universe ought to have just collapsed itself into one big blob. Newton suggested that the universe was infinite, filled with matter, so that attraction inward was balanced by the attraction of matter farther out. Nobody really bought that explanation, though. For one thing, it required a really precise arrangement: One star out of place, and the balance of attractions disappears and the universe collapses. It also required an infinity of stars, making it impossible to explain why it’s dark at night. (There would be a star out there along every line of sight at all times.)

Einstein hoped his theory of gravity would resolve the cosmic paradoxes of Newtonian gravity. So in early 1917, less than a year after his complete paper on the general theory was published, he delivered a short paper to the Prussian Academy of Sciences outlining the implications of his theory for cosmology.
In that paper, titled “Cosmological Considerations in the General Theory of Relativity,” he started by noting the problems posed by using Newton’s gravity to describe the universe. Einstein showed that Newton’s gravity would require a finite island of stars sitting in an infinite space. But over time such a collection of stars would evaporate. That problem could be avoided, though, if the universe turned out not to be infinite. Instead, Einstein said, everything would be fine if the universe is finite. Big, sure, but curved in such a way that it closed on itself, like a sphere.

Einstein’s mathematical challenge was to show that such a finite cosmic spacetime would be static and stable. (In those days nobody knew that the universe was expanding.) He assumed that on a large enough scale, the distribution of matter in this universe could be considered uniform. (Einstein said it was like viewing the Earth as a smooth sphere for most purposes, even though its terrain is full of complexities on smaller distance scales.) Matter’s effect on spacetime curvature would therefore be pretty much constant, and the universe’s overall condition would be unchanging.

All this made sense to Einstein because he had a limited view of what was actually going on in the cosmos. Like many scientists in those days, he believed the universe was basically just the Milky Way galaxy. All the known stars moved fairly slowly, consistent with his belief in a spherical cosmos with uniformly distributed mass. Unfortunately, general relativity’s math didn’t work if that was the case — it suggested the universe would not be stable. Einstein realized, though, that his view of the static spherical universe would succeed if he added a term to his original equation.

In fact, there were good reasons to include the term anyway. O’Raifeartaigh and colleagues point out that in his earlier work on general relativity, Einstein remarked in a footnote that his equation technically permitted the inclusion of an additional term. That didn’t seem to matter at the time. But in his cosmology paper, Einstein found that it was just the thing his equation needed to describe the universe properly (as Einstein then supposed the universe to be). So he added that factor, designated by the Greek letter lambda, to the left-hand side of his basic general relativity equation.

“That term is necessary only for the purpose of making possible a quasi-static distribution of matter, as required by the fact of the small velocities of the stars,” Einstein wrote in his 1917 paper. As long as the magnitude of this new term on the geometry side of the equation was small enough, it would not alter the theory’s predictions for planetary motions in the solar system.
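In modern notation (which differs from the symbols and units Einstein used in 1917), the change amounts to adding a single term to the geometry side of the field equation:

```latex
% Field equation without the cosmological term:
G_{\mu\nu} = \frac{8\pi G}{c^4}\, T_{\mu\nu}

% With Einstein's 1917 addition of the lambda term:
G_{\mu\nu} + \Lambda g_{\mu\nu} = \frac{8\pi G}{c^4}\, T_{\mu\nu}
```

Here G_μν encodes the curvature of spacetime, T_μν the density of mass-energy, g_μν the metric describing spacetime geometry, and Λ (lambda) the new constant; as Einstein noted, keeping lambda small enough leaves the theory’s solar system predictions essentially unchanged.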

Einstein’s 1917 paper demonstrated the mathematical effectiveness of lambda (also called the “cosmological constant”) but did not say much about its physical interpretation. In another paper, published in 1918, he commented that lambda represented a negative mass density — it played “the role of gravitating negative masses which are distributed all over the interstellar space.” Negative mass would counter the attractive gravity and prevent all the matter in Einstein’s spherical finite universe from collapsing.

As everybody now knows, though, there is no danger of collapse, because the universe is not static to begin with, but rather is rapidly expanding. After Edwin Hubble had established such expansion, Einstein abandoned lambda as unnecessary (or at least, set it equal to zero in his equation). Others built on Einstein’s foundation to derive the math needed to make sense of Hubble’s discovery, eventually leading to the modern view of an expanding universe initiated by a Big Bang explosion.

But in the 1990s, astronomers discovered that the universe is not only expanding, it is expanding at an accelerating rate. Such acceleration requires a mysterious driving force, nicknamed “dark energy,” exerting negative pressure in space. Many experts believe Einstein’s cosmological constant, now interpreted as a constant amount of energy with negative pressure infusing all of space, is the dark energy’s true identity.

Einstein might not have been surprised by all of this. He realized that only time would tell whether his lambda would vanish to zero or play a role in the motions of the heavens. As he wrote in 1917 to the Dutch physicist-astronomer Willem de Sitter: “One day, our actual knowledge of the composition of the fixed-star sky, the apparent motions of fixed stars, and the position of spectral lines as a function of distance, will probably have come far enough for us to be able to decide empirically the question of whether or not lambda vanishes.”

Immune cells play surprising role in steady heartbeat

Immune system cells may help your heart keep the beat. These cells, called macrophages, usually protect the body from invading pathogens. But a new study published April 20 in Cell shows that in mice, the immune cells help electricity flow between muscle cells to keep the organ pumping.

Macrophages squeeze in between heart muscle cells, called cardiomyocytes. These muscle cells rhythmically contract in response to electrical signals, pumping blood through the heart. By “plugging in” to the cardiomyocytes, macrophages help the heart cells receive the signals and stay on beat.

Researchers have known for a couple of years that macrophages live in healthy heart tissue. But their specific functions “were still very much a mystery,” says Edward Thorp, an immunologist at Northwestern University’s Feinberg School of Medicine in Chicago. He calls the study’s conclusion that macrophages electrically couple with cardiomyocytes “paradigm shifting.” It highlights “the functional diversity and physiologic importance of macrophages, beyond their role in host defense,” Thorp says.

Matthias Nahrendorf, a cell biologist at Harvard Medical School, stumbled onto this electrifying find by accident.

Curious about how macrophages affect the heart, he tried to perform a cardiac MRI on a mouse genetically engineered to lack the immune cells. But the rodent’s heartbeat was too slow and irregular for the scan to work.

These symptoms pointed to a problem in the mouse’s atrioventricular node, a bundle of muscle fibers that electrically connects the upper and lower chambers of the heart. Humans with AV node irregularities may need a pacemaker to keep their heart beating in time. In healthy mice, researchers discovered macrophages concentrated in the AV node, but what the cells were doing there was unknown.

Isolating a heart macrophage and testing it for electrical activity didn’t solve the mystery. But when the researchers coupled a macrophage with a cardiomyocyte, the two cells began communicating electrically. That’s important, because the heart muscle cells contract thanks to electrical signals.

Cardiomyocytes have an imbalance of ions. While in the resting state, there are more positive ions outside the cell than inside, but when a cardiomyocyte receives an electrical signal from a neighboring heart cell, that distribution switches. This momentary change causes the cell to contract and send the signal on to the next cardiomyocyte.
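
The resting-state ion imbalance can be illustrated with the Nernst equation, which gives the equilibrium voltage a single ion species sets up across a membrane. This is a textbook sketch using typical potassium concentrations, not measurements from this study:

```python
import math

def nernst_potential(z, conc_out_mM, conc_in_mM, temp_K=310.0):
    """Equilibrium membrane potential (volts) for one ion species.

    z: ion charge (+1 for K+), concentrations in millimolar,
    temp_K: body temperature in kelvin.
    """
    R = 8.314    # gas constant, J/(mol*K)
    F = 96485.0  # Faraday constant, C/mol
    return (R * temp_K) / (z * F) * math.log(conc_out_mM / conc_in_mM)

# Typical potassium concentrations: ~5 mM outside the cell, ~140 mM
# inside. The imbalance leaves the interior near -90 mV, the resting
# state that depolarization briefly reverses.
e_k = nernst_potential(z=1, conc_out_mM=5.0, conc_in_mM=140.0)
print(f"K+ equilibrium potential: {e_k * 1000:.0f} mV")
```

A depolarizing signal transiently drives the membrane voltage away from this resting value, which is what triggers the contraction described above.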

Scientists previously thought that cardiomyocytes managed this electrical shift, called depolarization, entirely on their own. But Nahrendorf and his team found that macrophages aid in the process. Using a protein, a macrophage hooks up to a cardiomyocyte. This protein directly connects the insides of the two cells, allowing the macrophage to transfer positive charge and give the cardiomyocyte a boost, a bit like a jumper cable. That makes it easier for the heart cells to depolarize and trigger a contraction, Nahrendorf says.

“With the help of the macrophages, the conduction system becomes more reliable, and it is able to conduct faster,” he says.

Nahrendorf and colleagues found macrophages within the AV node in human hearts as well but don’t know if the cells play the same role in people. The next step is to confirm that role and explore whether or not the immune cells could be behind heart problems like arrhythmia, says Nahrendorf.

Long naps lead to less night sleep for toddlers

Like most moms and dads, my time in the post-baby throes of sleep deprivation is a hazy memory. But I do remember feeling instant rage upon hearing a popular piece of advice for how to get my little one some shut-eye: “sleep begets sleep.” The rule’s reasoning is unassailable: To get some sleep, my baby just had to get some sleep. Oh. So helpful. Thank you, lady in the post office and entire Internet.

So I admit to feeling some satisfaction when I came across a study that found an exception to the “sleep begets sleep” rule. The study quite reasonably suggests there is a finite amount of sleep to be had, at least for the 50 Japanese 19-month-olds tracked by researchers.

The researchers used activity monitors to record a week’s worth of babies’ daytime naps, nighttime sleep and activity patterns. The results, published June 9, 2016, in Scientific Reports, showed a trade-off between naps and night sleep. Naps came at the expense of night sleep: The longer the nap, the shorter the night sleep, the researchers found. And naps that stretched late into the afternoon seemed to push back bedtime.
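
The trade-off can be illustrated with a short sketch. The nap and night-sleep hours below are invented to mimic the reported pattern (they are not the study's data); the point is that the correlation is negative while the daily total stays roughly fixed:

```python
# Illustrative only: hypothetical nap/night-sleep hours for six
# toddlers, invented to mimic the reported trade-off.
naps   = [0.5, 1.0, 1.5, 2.0, 2.5, 3.0]            # daytime nap, hours
nights = [12.4, 12.0, 11.4, 11.1, 10.4, 10.1]      # night sleep, hours

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

r = pearson(naps, nights)
print(f"nap vs. night-sleep correlation: r = {r:.2f}")
# Each child's total stays near 13 hours: sleep shifts between day
# and night rather than adding up.
```

A strongly negative r with a flat total is exactly the "finite amount of sleep" picture the study suggests.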

In this study, naps didn’t affect the total amount of sleep each child got. Instead, the distribution of sleep across day and night changed. That means you probably can’t tinker with your toddler’s nap schedule without also tinkering with her nighttime sleep. In a way, that’s reassuring: It makes it harder to screw up the nap in a way that leads to a sleep-deprived child. If daytime sleep is lacking, your child will probably make up for it at night.

A sleeping child looks blissfully relaxed, but beneath that quiet exterior, the body is doing some incredible work. New concepts and vocabulary get stitched into the brain. The immune system hones its ability to bust germs. And limbs literally stretch. Babies grew longer in the four days right after they slept more than normal, scientists reported in Sleep in 2011. Scientists don’t yet know if this important work happens selectively during naps or night sleep.

Right now, both my 4-year-old and 2-year-old take post-lunch naps (and on the absolute best of days, those naps occur in glorious tandem). Their siestas probably push their bedtimes back a bit. But that’s OK with all of us. Long spring and summer days make it hard for my girls to go to sleep at 7:30 p.m. anyway. The times I’ve optimistically tried an early bedtime, my younger daughter insists I look out the window to see the obvious: “The sky is awake, Mommy.”

Why create a model of mammal defecation? Because everyone poops

An elephant may be hundreds of times larger than a cat, but when it comes to pooping, it doesn’t take the elephant hundreds of times longer to heed nature’s call. In fact, both animals will probably get the job done in less than 30 seconds, a new study finds.

Humans would probably fit in that time frame too, says Patricia Yang, a mechanical engineering graduate student at the Georgia Institute of Technology in Atlanta. That’s because elephants, cats and people all excrete cylindrical poop. The size of all those animals varies, but so does the thickness of the mucus lining in each animal’s large intestine, so no matter the mammal, everything takes about the same time — an average of 12 seconds — to come out, Yang and her colleagues conclude April 25 in Soft Matter.
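
The cancellation argument can be sketched numerically: if feces length and exit speed both grow with body size, their quotient, the defecation time, stays put. The lengths and velocities below are invented for illustration, not measurements from the study:

```python
# Hypothetical (illustrative) feces length and exit velocity for three
# differently sized cylinder-pooping mammals. Values are invented to
# show the cancellation, not taken from the Soft Matter study.
animals = {
    # name:     (length_cm, velocity_cm_per_s)
    "cat":      (12.0, 1.0),
    "human":    (24.0, 2.0),
    "elephant": (96.0, 8.0),
}

# Duration is simply length divided by exit velocity.
durations = {name: length / velocity
             for name, (length, velocity) in animals.items()}

for name, d in durations.items():
    print(f"{name}: {d:.0f} s")
# Because length and velocity grow together here, every duration
# comes out at the same 12 s.
```

In the researchers' model it is the size-dependent mucus layer that keeps the velocity growing in step with the length.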

But the average poop time is not the real takeaway here (though it will make a fabulous answer to a question on Jeopardy one day). Previous studies on defecation have largely come from the world of medical research. “We roughly know how it happened, but not the physics of it,” says Yang.

Looking more closely at those physical properties could prove useful in a number of ways. For example, rats are often good models for humans in disease research, but they aren’t when it comes to pooping because rats are pellet poopers. (They’re not good models for human urination, either, because their pee comes out differently than ours, in high-speed droplets instead of a stream.)

Also, since the thickness of the mucus lining is dependent on animal size, it would be better to find a more human-sized stand-in. Such work could help researchers find new treatments for constipation and diarrhea, in which the mucus lining plays a key role, the researchers note.

Animal defecation may seem like an odd topic for a mechanical engineer to take on, but Yang notes that the principles of fluid dynamics apply inside the body and out. Her previous research includes a study on animal urination, finding that, as with pooping, the time it takes for mammals to pee also falls within a small window. (The research won her group an Ig Nobel Prize in 2015.)

And while many would find this kind of research disgusting, Yang does not. “Working with poop is not that bad, to be honest,” she says. “It’s not that smelly.” Plus, she gets to go to the zoo and aquarium for her research rather than be stuck in the lab.

But the research does involve a lot of poop — and watching it fall. For the study, the researchers timed how long it took animals to defecate and calculated the velocity of the feces of 11 species. They filmed dogs at a park and elephants, giant pandas and warthogs at Zoo Atlanta. They also dug up 19 YouTube videos of mammals defecating. Surprisingly, there are a lot of those videos available, though not many were actually useful for the research. “We wanted a complete event, from beginning to end,” Yang notes. Apparently not everyone interested in pooping animals bothers to capture the full fall of the feces.

The researchers also examined feces from dozens of mammal species. (They fall into two classes: Carnivores defecate “sinkers,” since their feces are full of heavy indigestible ingredients like fur and bones. Herbivores defecate less-dense “floaters.”) And they considered the thickness and viscosity of the mucus that lines mammals’ intestines and helps everything move along, as well as the rectal pressure that pushes the material. All this information went into a mathematical model of mammal defecation — which revealed the importance of the mucus lining.

Yang isn’t done with this line of research. The model she and her colleagues created applies only to mammals that poop like we do. There’s still the pellet poopers, like rats and rabbits, and wombats, whose feces look like rounded cubes. “I would like to complete the whole set,” she says. And, “if you’ve got a good team, it’s fun.”

How a flamingo balances on one leg

A question flamingo researchers get asked all the time — why the birds stand on one leg — may need rethinking. The bigger puzzle may be why flamingos bother standing on two.

Balance aids built into the birds’ basic anatomy allow for a one-legged stance that demands little muscular effort, tests find. This stance is so exquisitely stable that a bird sways less to keep itself upright when it appears to be dozing than when it’s alert with eyes open, two Atlanta neuromechanists report May 24 in Biology Letters.

“Most of us aren’t aware that we’re moving around all the time,” says Lena Ting of Emory University, who measures what’s called postural sway in standing people as well as in animals. Just keeping the human body vertical demands constant sensing and muscular correction for wavering. Even standing robots “are expending quite a bit of energy,” she says. That could have been the case for flamingos, she points out, since effort isn’t always visible.

Ting and Young-Hui Chang of the Georgia Institute of Technology tested balance in fluffy young Chilean flamingos coaxed onto a platform attached to an instrument that measures how much they sway. Keepers at Zoo Atlanta hand-rearing the test subjects let researchers visit after feeding time in hopes of catching youngsters inclined toward a nap — on one leg on a machine. “Patience,” Ting says, was the key to any success in this experiment.

As a flamingo standing on one foot shifted to preen a feather or joust with a neighbor, the instrument tracked wobbles in the foot’s center of pressure, the spot where the bird’s weight focused. When a bird tucked its head onto its pillowy back and shut its eyes, the center of pressure made smaller adjustments (within a radius of 3.2 millimeters on average, compared with 5.1 millimeters when active).
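
Sway of the kind the platform measured can be summarized as the root-mean-square distance of the center-of-pressure samples from their mean position. The traces below are synthetic, scaled so the "dozing" one wanders less, loosely mimicking (not reproducing) the reported 3.2- versus 5.1-millimeter contrast:

```python
import math
import random

def sway_radius(samples):
    """RMS distance (same units as input) of center-of-pressure
    (x, y) samples from their mean position."""
    n = len(samples)
    mx = sum(x for x, _ in samples) / n
    my = sum(y for _, y in samples) / n
    return math.sqrt(sum((x - mx) ** 2 + (y - my) ** 2
                         for x, y in samples) / n)

rng = random.Random(7)  # fixed seed so the sketch is repeatable
# Synthetic center-of-pressure wander for an alert bird, in mm.
awake = [(rng.gauss(0, 3.6), rng.gauss(0, 3.6)) for _ in range(500)]
# The same kind of trace scaled down: a steadier, dozing stance.
dozing = [(0.6 * x, 0.6 * y) for x, y in awake]

print(f"awake:  {sway_radius(awake):.1f} mm")
print(f"dozing: {sway_radius(dozing):.1f} mm")
```

Smaller excursions of the center of pressure with eyes shut are what mark the one-legged stance as passively stable rather than actively controlled.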

Museum bones revealed features of the skeleton that might enhance stability, but bones alone didn’t tell the researchers enough. Deceased Caribbean flamingos donated to science by a zoo gave a better view. “The ‘ah-ha!’ moment was when I said, ‘Wait, let’s look at it in a vertical position,’” Ting remembers. All of a sudden, the bird specimen settled naturally into one-legged lollipop alignment.

In flamingo anatomy, the hip and the knee lie well up inside the body. What bends in the middle of the long flamingo leg is not a knee but an ankle (which explains why to human eyes a walking flamingo’s leg joint bends the wrong way). The bones themselves don’t seem to have a strict on-off locking mechanism, though Ting has observed bony crests, double sockets and other features that could facilitate stable standing.

The bird’s distribution of weight, however, looked important for one-footed balance. The flamingo’s center of gravity was close to the inner knee where bones started to form the long column to the ground, giving the precarious-looking position remarkable stability. The specimen’s body wasn’t as stable on two legs, the researchers found.

Reinhold Necker of Ruhr University in Bochum, Germany, is cautious about calling one-legged stances an energy saver. “The authors do not consider the retracted leg,” says Necker, who has studied flamingos. Keeping that leg retracted could take some energy, even if easy balancing saves some, he proposes.

The new study takes an important step toward understanding how flamingos stand on one leg, but doesn’t explain why, comments Matthew Anderson, a comparative psychologist at St. Joseph’s University in Philadelphia. He’s found that more flamingos rest one-legged when temperatures drop, so he proposes that keeping warm might have something to do with it. The persistent flamingo question still stands.