Tuesday, February 6, 2018

I could use some new brain cells. J.R.

Adult brains produce new cells in previously undiscovered area
A University of Queensland discovery may lead to new treatments for anxiety, depression and post-traumatic stress disorder (PTSD).

UQ Queensland Brain Institute scientists have discovered that new brain cells are produced in the adult amygdala, a region of the brain important for processing emotional memories.
Disrupted connections in the amygdala, an ancient part of the brain, are linked to anxiety disorders such as PTSD.

Queensland Brain Institute director Professor Pankaj Sah said the research marked a major shift in understanding the brain’s ability to adapt and regenerate.

“While it was previously known that new neurons are produced in the adult brain, excitingly this is the first time that new cells have been discovered in the amygdala,” Professor Sah said.

“Our discovery has enormous implications for understanding the amygdala’s role in regulating fear and fearful memories.”

Researcher Dr Dhanisha Jhaveri said the amygdala played a key role in fear learning – the process by which we associate a stimulus with a frightening event. “Fear learning leads to the classic flight or fight response – increased heart rate, dry mouth, sweaty palms – but the amygdala also plays a role in producing feelings of dread and despair, in the case of phobias or PTSD, for example,” Dr Jhaveri said.

“Finding ways of stimulating the production of new brain cells in the amygdala could give us new avenues for treating disorders of fear processing, which include anxiety, PTSD and depression.”

Previously, new brain cells in adults were known to be produced only in the hippocampus, a brain region important for spatial learning and memory.
The discovery of that process, called neurogenesis, was made by Queensland Brain Institute founding director Professor Perry Bartlett, who was also involved in the latest research.

“Professor Bartlett’s discovery overturned the belief at the time that the adult brain was fixed and unable to change,” Professor Sah said.

“We have now found stem cells in the amygdala in adult mice, which suggests that neurogenesis occurs in both the hippocampus and the amygdala.

“The discovery deepens our understanding of brain plasticity and provides the framework for understanding the functional contribution of new neurons in the amygdala,” Professor Sah said.

The research, led by Professor Sah, Professor Bartlett and Dr Jhaveri, is published in Molecular Psychiatry.

Journal article:
http://www.nature.com/articles/mp2017134?WT.feed_name=subjects_biological-sciences

Source:
https://www.uq.edu.au/news/article/2017/08/adult-brains-produce-new-cells-previously-undiscovered-area

#anxiety #PTSD #amygdala #neurogenesis #interneurons #neuroscience

Thursday, February 1, 2018

Creating artificial life in the laboratory.

Dr. Frankenstein would be proud! Grin!
Ann Donnelly was utterly confused the first time she examined her protein. On all counts, it behaved like an enzyme—a protein catalyst that speeds up biological reactions in cells. One could argue that enzymes, sculpted by eons of evolution, make life possible.
There was just one problem: her protein wasn’t evolved. It wasn’t even “natural.” It was, in fact, a completely artificial construct made with random sequences of DNA—something that’s never existed in nature before.
Donnelly was looking at the first artificial enzyme. An artificial protein that, by all accounts, should not be able to play nice with the intricate web of biochemical components and reactions that support life.
Yet when given to a mutant bacterium that lacks the ability to metabolize iron, the enzyme Syn-F4 filled in the blank. It kickstarted the bacterium’s iron-processing pathways, naturally replacing the organism’s missing catalyst—even though it was like nothing seen in life.
“That was an incredible and unbelievable moment for me—unbelievable to the point that I didn’t want to say anything until I had repeated it several times,” says Donnelly, who published her results in Nature Chemical Biology.
The big picture? We are no longer bound by the chemical rules of nature. In a matter of months, scientists can engineer biological catalysts that normally take millions of years to evolve and fine-tune.
And with brand new enzymes comes the possibility of brand new life.
“Our work suggests that the construction of artificial genomes capable of sustaining cell life may be within reach,” says Dr. Michael Hecht at Princeton University, who led the study.

Cogs in the Machine

In 2011, Hecht was examining the limits of artificial biology.
At the time, many synthetic biologists had begun viewing biological processes as Lego blocks—something you could deconstruct, isolate, and reshuffle to build new constructs to your liking.
But Hecht was interested in something a little different. Rather than copy-and-pasting existing bits of genetic code across organisms, could we randomly build brand new molecular machines—proteins—from scratch?
Ultimately, it comes down to numbers. Like DNA, proteins are made up of a finite selection of chemical components: 20 amino acids, which combine in unique sequences into a chain.
For an average protein of 100 “letters,” the combinations are astronomically large. Yet an average cell produces only about 100,000 different proteins. Why is this? Do known proteins have some fundamental advantage? Or has evolution simply not yet had the chance to fashion even better workers? Could we tap into all that sweet potential?
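
To get a sense of just how large that space is, here is a quick back-of-the-envelope calculation (a rough Python sketch; the 100-letter chain and the 100,000-protein figure are simply the numbers quoted above):

    # Rough combinatorics of protein sequence space.
    AMINO_ACIDS = 20      # possible letters at each position
    CHAIN_LENGTH = 100    # the "average protein" length used above

    possible_sequences = AMINO_ACIDS ** CHAIN_LENGTH   # 20^100, about 10^130
    proteins_per_cell = 100_000                        # figure quoted above

    print(f"Possible 100-letter proteins: about 10^{len(str(possible_sequences)) - 1}")
    print(f"Fraction a cell ever samples: {proteins_per_cell / possible_sequences:.0e}")

Even a million-member library, like the one described in the next section, covers an almost immeasurably small corner of that space.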

A New Toolkit

Hecht and his group used a computer program to randomly generate one million new sequences. The chains were then folded into intricate 3D shapes based on the rules of biophysics.
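
The article doesn’t include the team’s actual library-design code, so the snippet below is only a toy sketch of the first step it describes: generating a large set of random amino-acid chains. The one-million count comes from the text; the 100-letter length and everything else here are assumptions for illustration.

    import random

    # One-letter codes for the 20 standard amino acids.
    AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

    def random_protein(length=100):
        """Return a random amino-acid sequence of the given length."""
        return "".join(random.choice(AMINO_ACIDS) for _ in range(length))

    # A library of one million random candidates, as described above.
    # Folding them and testing them in living bacteria is, of course, the hard part.
    library = [random_protein() for _ in range(1_000_000)]
    print(library[0])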
The litmus test: one by one, the team inserted this new library of artificial proteins into mutant strains of bacteria that lacked essential genes. Without these genes, the mutants couldn’t adapt to harsh new environments—say, high salts—and died.
Remarkably, a small group of artificial proteins saved these mutants from certain death. It’s like randomly shuffling letters of known words and phrases to make new ones, yet somehow the new vocabulary makes perfect sense in an existing paragraph.
“The information encoded in these artificial genes is completely novel—it does not come from, nor is it significantly related to, information encoded by natural genes, and yet the end result is a living, functional microbe,” said Michael Fisher, a graduate student in Hecht’s lab, at the time.
How?
A series of subsequent studies showed that many of these artificial proteins worked by boosting the cell’s backup biological processes—increasing the expression of genes that allow the cell to survive under selection pressure, for example.
The lab thought they had it nailed, until one protein came along: Syn-F4.

The New Catalyst

Syn-F4 is a direct descendant of one of the original “new” proteins. Earlier, the team discovered that the protein could help mutant bacteria thrive in a low-iron environment—just not very well.
Mimicking evolution, they then randomly mutated some of the “letters” of the protein into a library, and screened them for candidates that worked even better than the original for supporting low-iron life. The result? Syn-F4.
Donnelly took on the detective work. Normally, scientists can scour the sequence of a newly discovered protein, match it up to similar others and begin guessing how it works. This was obviously not possible here, since Syn-F4 doesn’t look like anything in nature.
The protein also escaped all attempts at crystallization, which would have frozen it in place and allowed scientists to work out its 3D structure.
In a clever series of experiments, Donnelly cracked the mystery. Like baking soda, Syn-F4 sped up iron-releasing reactions when mixed with the right ingredients. What’s more, it’s also extremely picky about its “clients”: it would only grab onto one structural form of an ingredient (say, a form that looks like your left hand) but not its mirror image (the right hand)—a hallmark of enzymes.
Several more years of digging unveiled a true gem: Syn-F4’s catalytic core, a short sequence hidden in the protein’s heart that makes its enzyme activity possible.
Mutating the protein’s letters one by one, Donnelly tenaciously identified those changes that rendered the protein inactive. This process eventually pinpointed key letters that likely form the protein’s so-called “active site,” scattered across Syn-F4’s sequence.
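The article doesn’t spell out the mutagenesis protocol, but the logic it describes—changing one letter at a time and flagging the changes that kill activity—can be sketched as a simple scan. The alanine substitution and the measure_activity placeholder below are assumptions for illustration; the real measurements were done at the bench.

    # Sketch of the one-letter-at-a-time scan described above.
    # measure_activity stands in for the wet-lab enzyme assay.
    def scan_for_active_site(sequence, measure_activity, threshold=0.1):
        """Flag positions where a substitution abolishes enzyme activity."""
        baseline = measure_activity(sequence)
        critical_positions = []
        for i in range(len(sequence)):
            mutant = sequence[:i] + "A" + sequence[i + 1:]   # swap in alanine
            if measure_activity(mutant) < threshold * baseline:
                critical_positions.append(i)   # likely part of the active site
        return critical_positions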
Like petals on a rosebud, the process of folding brings these active letters together into a 3D core. And like Syn-F4 itself, its structure looks completely different from that of any native enzyme.
The team explains, “We don’t think Syn-F4 is replacing the mutant bacteria’s missing enzymes; we think it’s working through a completely different mechanism.”
“We have a completely novel protein that’s capable of sustaining life by actually being an enzyme—and that’s just crazy,” says Hecht.

A New Life?

The implications are huge, says Dr. Justin Siegel at the UC Davis Genome Center, who wasn’t involved in the study.
Biotechnology routinely relies on enzymes for industrial applications, such as making drugs, fuel, and materials.
“We are no longer limited to the proteins produced by nature; we can develop proteins—ones that would normally have taken billions of years to evolve—in a matter of months,” he says.
But even more intriguing is this: the study shows that enzymes made naturally aren’t the solution to life. They’re just one solution.
This means we need to broaden our search for biochemical reactions and life, on Earth and elsewhere. After all, if multiple solutions exist for a biological problem, it makes it much more likely that one has already been found elsewhere in the universe.
Back on Earth, Hecht is extremely excited for the future of artificial life.
“We’re starting to code for an artificial genome,” he says. Right now we’ve replaced about 0.1 percent of the genes in a bacterium, so it’s just a weird organism with some funky artificial genes.
But suppose you replace 20 percent of genes, 30 percent, or more. Suppose a cohort of completely artificial enzymes runs the bacteria’s metabolism.
“Then it’s not just a weird E. coli with some artificial genes, then you have to say it’s a novel organism,” he says.
Shelly Xuelai Fan is a neuroscientist at the University of California, San Francisco, where she studies ways to make old brains young again. In addition to research, she's also an avid science writer with an insatiable obsession with biotech, AI and all things neuro. She spends her spare time kayaking, bike camping and getting lost in the woods.


Wednesday, January 24, 2018

Corina Marinescu > NEUROSCIENCE

Blocking a key enzyme may reverse memory loss
In the brains of Alzheimer’s patients, many of the genes required to form new memories are shut down by a genetic blockade, contributing to the cognitive decline seen in those patients.

MIT researchers have shown that they can reverse that memory loss in mice by interfering with the enzyme that forms the blockade. The enzyme, known as HDAC2, turns genes off by condensing them so tightly that they can’t be expressed.

For several years, scientists and pharmaceutical companies have been trying to develop drugs that block this enzyme, but most of these drugs also block other members of the HDAC family, which can lead to toxic side effects. The MIT team has now found a way to precisely target HDAC2, by blocking its interaction with a binding partner called Sp3.

“This is exciting because for the first time we have found a specific mechanism by which HDAC2 regulates synaptic gene expression,” says Li-Huei Tsai, director of MIT’s Picower Institute for Learning and Memory and the study’s senior author.

Blocking that mechanism could offer a new way to treat memory loss in Alzheimer’s patients. In this study, the researchers used a large protein fragment to interfere with HDAC2, but they plan to seek smaller molecules that would be easier to deploy as drugs.

Source & further reading:
http://news.mit.edu/2017/blocking-key-enzyme-may-reverse-memory-loss-0808

Journal article:
http://www.cell.com/cell-reports/fulltext/S2211-1247(17)31023-9

#HDAC2 #alzheimersdisease #geneexpression #plasticity #memory #neuroscience

Monday, January 22, 2018

OK, so how do we decrease brain acidity?

Increased brain acidity in psychiatric disorders
Your body’s acid/alkaline homeostasis, or maintenance of an adequate pH balance in tissues and organs, is important for good health. An imbalance in pH, particularly a shift toward acidity, is associated with various clinical conditions, such as a decreased cardiovascular output, respiratory distress, and renal failure. But is pH also associated with psychiatric disorders?

Researchers at the Institute for Comprehensive Medical Science at Fujita Health University in Japan, along with colleagues from eight other institutions, have identified decreased pH levels in the brains of five different mouse models of mental disorders, including models of schizophrenia, bipolar disorder, and autism spectrum disorder. This decrease in pH likely reflects an underlying pathophysiology in the brain associated with these mental disorders, according to the study published August 4th in the journal Neuropsychopharmacology.

While post-mortem studies have shown that the brains of patients with the abovementioned mental disorders tend to have a lower pH than those of controls, this phenomenon has been considered to be the result of secondary factors associated with the diseases rather than a primary feature of the diseases themselves. Secondary factors that confound the observation of a decreased brain pH level include antipsychotic treatments and agonal experiences associated with these disorders.

Dr. Miyakawa and his colleagues performed a meta-analysis of existing datasets from ten studies to investigate the pH level of postmortem brains from patients with schizophrenia and bipolar disorder. They observed that patients with schizophrenia and bipolar disorder exhibited significantly lower brain pH levels than control participants, even when potential confounding factors were considered (i.e., postmortem interval, age at death, and history of antipsychotic use). “These factors may not be major factors causing a decrease in pH in the postmortem brains of patients with schizophrenia and bipolar disorder,” Miyakawa explains.

The researchers then conducted a systematic investigation of brain pH using five mouse models of psychiatric disorders, including models for schizophrenia, bipolar disorder, and autism spectrum disorders. All of the mice used in the study were drug-naive, with equivalent agonal states, postmortem intervals, and ages within each strain. The analyses revealed that in all five mouse models, brain pH was significantly lower than that in the corresponding controls. In addition, the levels of lactate were also elevated in the brains of the model mice, and a significant negative correlation was found between brain pH and lactate levels. The increase in lactate may explain the decreased brain pH levels, as lactate is known to act as a strong acid.
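
For readers curious what that “significant negative correlation” means in practice, here is a minimal sketch of the statistical test involved (a Pearson correlation). The pH and lactate values below are invented for illustration and are not the study’s data.

    # Illustrative only: invented pH and lactate values for six hypothetical animals.
    from scipy.stats import pearsonr

    brain_ph = [7.22, 7.18, 7.15, 7.10, 7.06, 7.01]
    lactate = [1.0, 1.2, 1.4, 1.7, 2.0, 2.4]   # arbitrary units

    r, p = pearsonr(brain_ph, lactate)
    print(f"Pearson r = {r:.2f}, p = {p:.4f}")   # r comes out strongly negative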

Miyakawa suggests that “while it is technically impossible to completely exclude confounding factors in human studies, our findings in mouse models strongly support the notion that decreased pH associated with increased lactate levels reflects an underlying pathophysiology, rather than a mere artifact, in at least a subgroup of patients with these mental disorders.”

Changes in brain pH have long been treated as an artifact; as a result, substantial effort has gone into matching tissue pH among study participants and controlling for the effect of pH on molecular changes in the postmortem brain. However, if decreased brain pH is instead a pathophysiological trait of psychiatric disorders, these efforts could have unwittingly obscured pathophysiological signatures that are potentially associated with changes in pH, such as neuronal hyper-excitation and inflammation, both of which have been implicated in the etiology of psychiatric disorders. By highlighting decreased brain pH as a shared endophenotype of psychiatric disorders, the present study therefore has significant implications for the entire field of research on the pathophysiology of mental disorders.

This research raises new questions about changes in brain pH. For example, what are the mechanisms through which lactate is increased and pH is decreased? Are specific brain regions responsible for the decrease in pH? Is there functional significance to the decrease in brain pH observed in psychiatric disorders, and if so, is it a cause or result of the onset of the disorder? Further studies are needed to address these issues.

Source:
https://www.eurekalert.org/pub_releases/2017-08/fhu-iba080717.php

Journal article:
https://www.nature.com/articles/npp2017167

#psychiatricdisorders #phbalance #schizophrenia #bipolardisorder #lactate #neuroscience #research

Sunday, January 21, 2018

Warning! Machines are capable of self learning!

Gentle readers:
 I am an easygoing kind of person and it takes something serious to push me into writing warnings! I believe that machines with the ability to teach machines at super speed will lead to serious problems for the human race in the very near future. If that does not shake you up a little how about machines teaching themselves at lightning speed? Need more information? Read the following article. J.R. ---
-------------------------------------
“The easier it is to communicate, the faster change happens.” – James Burke, Science Historian
During an October 2015 press conference announcing the autopilot feature of the Tesla Model S, which allowed the car to drive semi-autonomously, Tesla CEO Elon Musk said each driver would become an “expert trainer” for every Model S. Each car could improve its own autonomous features by learning from its driver, but more significantly, when one Tesla learned from its own driver—that knowledge could then be shared with every other Tesla vehicle.
As Fred Lambert of Electrek reported shortly after, Model S owners noticed how quickly the car’s driverless features were improving. In one example, Teslas were taking incorrect early exits along highways, forcing their owners to manually steer the car along the correct route. After just a few weeks, owners noted the cars were no longer taking premature exits.
“I find it remarkable that it is improving this rapidly,” said one Tesla owner.
Intelligent systems, like those powered by the latest round of machine learning software, aren’t just getting smarter: they’re getting smarter faster. Understanding the rate at which these systems develop can be a particularly challenging part of navigating technological change.
Ray Kurzweil has written extensively on the gap between what he calls the “intuitive linear” view of technological change and the “exponential” rate of change now taking place. Almost two decades after he wrote his influential essay on “The Law of Accelerating Returns”—a theory of evolutionary change concerned with the speed at which systems improve over time—connected devices are now sharing knowledge between themselves, escalating the speed at which they improve.
“I think that this is perhaps the biggest exponential trend in AI,” said Hod Lipson, professor of mechanical engineering and data science at Columbia University, in a recent interview.
“All of the exponential technology trends have different ‘exponents,’” Lipson added. “But this one is potentially the biggest.”
According to Lipson, what we might call “machine teaching”—when devices communicate gained knowledge to one another—is a radical step up in the speed at which these systems improve.
“Sometimes it is cooperative, for example when one machine learns from another like a hive mind. But sometimes it is adversarial, like in an arms race between two systems playing chess against each other,” he said.
Lipson believes this way of developing AI is a big deal, in part, because it can bypass the need for training data.
“Data is the fuel of machine learning, but even for machines, some data is hard to get—it may be risky, slow, rare, or expensive. In those cases, machines can share experiences or create synthetic experiences for each other to augment or replace data. It turns out that this is not a minor effect, it actually is self-amplifying, and therefore exponential.”
Lipson sees the recent breakthrough from Google’s DeepMind, a project called AlphaGo Zero, as a stunning example of an AI learning without training data. Many are familiar with AlphaGo, the machine learning AI that became the world’s best Go player after studying a massive training dataset comprising millions of human Go moves. AlphaGo Zero, however, was able to beat even that Go-playing AI simply by learning the rules of the game and playing against itself—no training data necessary. Then, just to show off, it beat the world’s best chess-playing software after starting from scratch and training for only eight hours.
Now imagine thousands or more AlphaGo Zeroes instantaneously sharing their gained knowledge.
This isn’t just games though. Already, we’re seeing how it will have a major impact on the speed at which businesses can improve the performance of their devices.
One example is GE’s new industrial digital twin technology—a software simulation of a machine that models what is happening with the equipment. Think of it as a machine with its own self-image—which it can also share with technicians.
A steam turbine with a digital twin, for instance, can measure steam temperatures, rotor speeds, cold starts, and other data to predict breakdowns and warn technicians to prevent expensive repairs. The digital twins make these predictions by studying their own performance, but they also rely on models every other steam turbine has developed.
As machines begin to learn from their environments in new and powerful ways, their development is accelerated by communicating what they learn to each other. The collective intelligence of every GE turbine, spread across the planet, can accelerate each individual machine’s predictive ability. Where it may take one driverless car significant time to learn to navigate a particular city, one hundred driverless cars navigating that same city together, all sharing what they learn, can improve their algorithms in far less time.
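None of these companies publish their aggregation pipelines, but the core idea of pooling what a fleet learns can be sketched as simple parameter averaging, in the spirit of federated learning. The toy example below is an assumption for illustration, not Tesla’s or GE’s actual method.

    import numpy as np

    def fleet_average(local_weights):
        """Pool what each machine learned by averaging its model parameters."""
        return np.mean(local_weights, axis=0)

    # Three "cars", each with slightly different locally learned weights
    # for the same small model (random numbers stand in for real updates).
    car_updates = [np.ones(4) + 0.1 * np.random.randn(4) for _ in range(3)]
    shared_model = fleet_average(car_updates)
    print(shared_model)   # every car can now start from this pooled model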
As other AI-powered devices begin to leverage this shared knowledge transfer, we could see an even faster pace of development. So if you think things are developing quickly today, remember we’re only just getting started.
Image Credit: igor kisselev / Shutterstock.com
Aaron Frank is a writer and speaker and one of the earliest hires at Singularity University. Aaron is focused on the intersection of emerging technologies and accelerating change and is fascinated by the impact that both will have on business, society, and culture.

As a writer, his articles have appeared online in Vice's Motherboard, Wired UK and Forbes. As a speaker, Aaron has lectured fo...


Thursday, January 18, 2018

Wonderful pictures of the Earth and the Moon!


Visit Earth Blog for a wonderful picture of the Earth and the Moon.


https://earthspacecircle.blogspot.ca/2016/07/earth-and-moon-seen-by-dscovr.html



On July 5, 2016, the Moon passed between NOAA’s DSCOVR satellite and Earth. NASA’s EPIC camera aboard DSCOVR snapped these images over a period of about four hours. In this set, the far side of the Moon, which is never seen from Earth, passes by. In the backdrop, Earth rotates, starting with Australia and the Pacific and gradually revealing Asia and Africa.

Image Credit: NASA/NOAA
