Thursday, August 16, 2018

Dr. Ben Goertzel on artificial intelligence.

From Here to Human-Level Artificial General Intelligence in Four (Not All That) Simple Steps

In the 15 years since I first introduced the term “artificial general intelligence” (AGI), the AI field has advanced tremendously. We now have self-driving cars, automated face recognition and image captioning, machine translation and expert AI game-players, and so much more.
However, these achievements remain essentially in the domain of “narrow AI”—AI that carries out tasks based on specifically-supplied data or rules, or carefully-created training situations. AIs that can generalize to unanticipated domains and confront the world as autonomous agents are still part of the road ahead.
The question remains: what do we need to do to get from today’s narrow AI tools, which have become mainstream in business and society, to the AGI envisioned by futurists and science fiction authors?

The Diverse Proto-AGI Landscape

While there is a tremendous diversity of perspectives and no shortage of technical and conceptual ideas on the path to AGI, there is nothing resembling an agreement among experts on the matter.
For example, Google DeepMind co-founder and CEO Demis Hassabis has long been a fan of relatively closely brain-inspired approaches to AGI, and continues to publish papers in this direction. On the other hand, the OpenCog AGI-oriented project that I co-founded in 2008 is grounded in a less brain-oriented approach—it involves neural networks, but also heavily leverages symbolic-logic representations, probabilistic inference, and evolutionary program learning.
The bottom line is, just as we have many different workable approaches to manned flight—airplanes, helicopters, blimps, rockets, etc.—there may be many viable paths to AGI, some of which are more biologically inspired than others. And, somewhat like the Wright brothers, today’s AGI pioneers are proceeding largely via experiment and intuition, in part because we don’t yet know enough useful theoretical laws of general intelligence to proceed with AGI engineering in a mainly theory-guided way; the theory of AGI is evolving organically alongside the practice.

Four (Not Actually So) Simple Steps From Here to AGI

In a talk I gave recently at Josef Urban’s AI4REASON lab in Prague (where my son Zar is doing his PhD, by the way), I outlined “Four Simple Steps to Human-Level AGI.” The title was intended as dry humor, as none of the steps is actually simple at all. But I do believe they are achievable within our lifetime, maybe even in the next 5-10 years. Better yet, each of the four steps is currently being worked on by multiple teams of brilliant people around the world, including but by no means limited to my own teams at SingularityNET, Hanson Robotics, and OpenCog.
The good news is, I don’t believe we need radically better hardware, nor radically different algorithms, nor new kinds of sensors or actuators. We just need to use our computers and algorithms in a slightly more judicious way by doing the following.

1) Make cognitive synergy practical

We have a lot of powerful AI algorithms today, but we don’t use them together in sufficiently sophisticated ways, so we lose much of the synergetic intelligence that could come from using them together. By contrast, the different components in the human brain are tuned to work together with exquisite feedback and interplay. We need to make systems that enable richer and more thorough coordination of different AI agents at various levels into one complex, adaptive AI network.
For instance, within the OpenCog architecture, we seek to realize this by making different learning and reasoning algorithms work together on the AtomSpace hypergraph, which allows for the creation of hybrid networks consisting of symbolic and subsymbolic segments. Our probabilistic logic engine, which handles facts and beliefs; our evolutionary program learning engine, which handles how-to knowledge; and our deep neural nets for handling perception—all of these cooperate in updating the same set of hypergraph nodes and links.
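To make this concrete, here is a deliberately simplified sketch of what "several algorithms updating one shared knowledge store" can look like in code. The toy Hypergraph class, the two stand-in agents, and the confidence arithmetic are illustrative assumptions only, not the actual OpenCog AtomSpace API:

```python
# A minimal, hypothetical sketch of cognitive synergy via a shared knowledge store.
# This is NOT the real OpenCog AtomSpace API; the class and the two toy agents are
# illustrative assumptions only.

class Hypergraph:
    """A toy hypergraph: links are (relation, node, node) triples with a confidence."""
    def __init__(self):
        self.links = {}  # (relation, node_a, node_b) -> confidence in [0, 1]

    def add_link(self, relation, a, b, confidence):
        key = (relation, a, b)
        # Keep the strongest evidence seen so far for this link.
        self.links[key] = max(confidence, self.links.get(key, 0.0))

    def query(self, relation):
        return [(a, b, c) for (r, a, b), c in self.links.items() if r == relation]


def perception_agent(graph):
    """Stands in for a deep neural net: writes noisy perceptual facts into the graph."""
    graph.add_link("sees", "robot", "red_ball", confidence=0.9)
    graph.add_link("is_a", "red_ball", "ball", confidence=0.8)


def reasoning_agent(graph):
    """Stands in for a probabilistic logic engine: derives new links from existing ones."""
    for subj, obj, c1 in graph.query("sees"):
        for obj2, cls, c2 in graph.query("is_a"):
            if obj == obj2:
                # Simple probabilistic chaining: confidence is the product of the premises.
                graph.add_link("sees_instance_of", subj, cls, confidence=c1 * c2)


if __name__ == "__main__":
    shared = Hypergraph()
    perception_agent(shared)   # the subsymbolic component writes...
    reasoning_agent(shared)    # ...and the symbolic component reads and extends the same store
    print(shared.query("sees_instance_of"))  # [('robot', 'ball', ~0.72)]
```

The point of the sketch is simply that both components read from and write to one shared structure, so each benefits from the other's conclusions.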
On a different level, in the SingularityNET blockchain-based AI network, we work toward cognitive synergy by allowing different AI agents using different internal algorithms to make requests of each other and share information and results. The idea is that the network of AI agents, using a customized token for exchanging value, can become an overall cognitive economy of minds with an emergent-level intelligence going beyond the intelligence of the individual agents. This is a modern blockchain-based realization of AI pioneer Marvin Minsky’s idea of intelligence as a “society of mind.”
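As a rough illustration of the "economy of minds" idea, here is a toy sketch of two agents buying each other's services with a token. The Agent class, the prices, and the services are purely hypothetical and bear no relation to the actual SingularityNET protocol or its smart contracts:

```python
# A hypothetical toy "economy of minds": AI agents offering services to one another
# and paying with a token. Nothing here reflects the real SingularityNET interfaces.

class Agent:
    def __init__(self, name, service, price, balance=100):
        self.name, self.service, self.price, self.balance = name, service, price, balance

    def request(self, provider, payload):
        """Pay another agent's price in tokens and receive its result."""
        if self.balance < provider.price:
            raise RuntimeError(f"{self.name} cannot afford {provider.name}")
        self.balance -= provider.price
        provider.balance += provider.price
        return provider.service(payload)


# Two toy services: one "perceives", the other composes a caption from the first's output.
vision = Agent("vision", service=lambda img: {"objects": ["cat", "sofa"]}, price=2)
writer = Agent("writer",
               service=lambda scene: "I see: " + ", ".join(scene["objects"]),
               price=3)

scene = writer.request(vision, payload="photo.jpg")   # writer buys perception from vision
caption = vision.request(writer, payload=scene)       # vision buys captioning from writer
print(caption)                                         # "I see: cat, sofa"
print(vision.balance, writer.balance)                  # tokens have moved: 99 101
```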

2) Bridge symbolic and subsymbolic AI

I believe AGI will most effectively be achieved via bridging of the algorithms used for low-level intelligence, such as perception and movement (e.g., deep neural networks), with the algorithms used for high-level abstract reasoning (such as logic engines).
Deep neural networks have had amazing successes lately in processing multiple sorts of data, including images, video, audio, and to a lesser extent, text. However, it is becoming increasingly clear that these particular neural net architectures are not quite right for handling abstract knowledge. Cognitive scientist and AI entrepreneur Gary Marcus has written articulately on this; SingularityNET AI researcher Alexey Potapov has recently reported on his experiments probing the limits of the generalization ability of current deep neural net frameworks.
My own intuition is that the shortest path to AGI will be to use deep neural nets for what they’re best at and to hybridize them with more abstract AI methods like logic systems, in order to handle more advanced aspects of human-like cognition.
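A toy sketch of what such a bridge might look like: a stubbed perception module emits symbolic assertions with probabilities, and a tiny rule engine draws conclusions from them. The predicates, rules, and threshold below are invented for illustration; in a real system the lower layer would be an actual deep network and the upper layer a full logic engine:

```python
def neural_perception(image):
    """Placeholder for a deep neural net: returns (predicate, probability) pairs."""
    return [("contains(image, dog)", 0.92), ("contains(image, leash)", 0.85)]

# Each rule: (list of premises, conclusion). Purely illustrative.
RULES = [
    (["contains(image, dog)", "contains(image, leash)"], "likely(walking_the_dog)"),
]

def reason(assertions, threshold=0.8):
    """Keep only high-confidence percepts, then forward-chain over the rules."""
    facts = {predicate for predicate, prob in assertions if prob >= threshold}
    conclusions = set()
    for premises, conclusion in RULES:
        if all(p in facts for p in premises):
            conclusions.add(conclusion)
    return conclusions

print(reason(neural_perception("walk.jpg")))   # {'likely(walking_the_dog)'}
```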

3) Whole-organism architecture

Humans are bodies as much as minds, and so achieving human-like AGI will require embedding AI systems in physical systems capable of interacting with the everyday human world in nuanced ways.
The “whole organism architecture” (WHOA!!!) is a nice phrase introduced by my collaborator in robotics and mayhem, David Hanson. Currently, we are working with his beautiful robotic creation Sophia, whose software development I have led as a platform for experimenting with OpenCog and SingularityNET AI.
General intelligence does not require a human-like body, nor any specific body. However, if we want to create an AGI that manifests human-like cognition in particular and that can understand and relate to humans, then this AGI will need to have a sense of the peculiar mix of cognition, emotion, socialization, perception, and movement that characterizes human reality. By far the best way for an AGI to get such a sense is for it to have the ability to occupy a body that at least vaguely resembles the human body.
The need for whole organism architecture ties in with the importance of experiential learning for AGI. In the mind of a human baby, all sorts of data are mixed up in a complex way, and the goals and objectives need to be figured out along with the categories, structures, and dynamics in the world. Even the distinction between self and other and the notion of a persistent object have to be learned. Ultimately, an AGI will need to do this sort of foundational learning for itself as well.
While it is not necessarily wrong to supply one’s AGI system with data from texts and databases, one still needs to build a system that interacts with, perceives, and explores the world autonomously and builds its own model of itself and the world. The semantics of everything it learns is then grounded in its own observations. If it learns about something abstract, like language or math, it has to be able to ground the semantics of that in its own life, as well as in the abstraction.
Experiential learning does not require robotics. But whole-organism robotics does provide an extremely natural venue for moving beyond today’s training-by-example AIs to experiential learning.
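For readers who like things concrete, here is a toy experiential-learning loop in this spirit: the agent acts, observes, and builds its own small model of itself and its world from nothing but those observations. The environment, actions, and "self-model" here are invented for illustration and are nowhere near what a real embodied AGI would need:

```python
# A toy experiential-learning loop: the agent grounds a concept ("edge of the world")
# purely in its own sensations. Everything here is an illustrative assumption.
import random

class ToyWorld:
    """A one-dimensional world; the agent does not know its position in advance."""
    def __init__(self, size=5):
        self.size, self.agent_pos = size, random.randrange(size)

    def step(self, action):  # action is -1 or +1
        self.agent_pos = max(0, min(self.size - 1, self.agent_pos + action))
        return {"bump": self.agent_pos in (0, self.size - 1)}  # raw observation only

world = ToyWorld()
self_model = {"position_estimate": None, "bumps_seen": 0}

for _ in range(20):
    action = random.choice([-1, +1])
    obs = world.step(action)
    if obs["bump"]:
        # The notion of an "edge" is learned from the agent's own bump sensations.
        self_model["bumps_seen"] += 1
        self_model["position_estimate"] = "at an edge"
    else:
        self_model["position_estimate"] = "somewhere in the middle"

print(self_model)
```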

4) Scalable meta-learning

AGI needs not just learning but also learning how to learn. An AGI will need to apply its reasoning and learning algorithms recursively to itself so as to automatically improve its functionality.
Ultimately, the ability to apply learning to improve learning should allow AGIs to progress far beyond human capability. At the moment, meta-learning remains a difficult but critical research pursuit. At SingularityNET, for instance, we are just now beginning to apply OpenCog’s AI to recognize patterns in its own effectiveness over time, so as to improve its own performance.
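A heavily simplified sketch of the idea, an outer loop that watches how well an inner learner is doing and adjusts the learner's own learning parameters, might look like the following. The task, the learner, and the adaptation rule are toy assumptions, not OpenCog's or SingularityNET's actual meta-learning machinery:

```python
def inner_learner(learning_rate, steps=50, target=3.0):
    """Toy learner: gradient descent on (w - target)^2; returns the final error."""
    w = 0.0
    for _ in range(steps):
        gradient = 2 * (w - target)
        w -= learning_rate * gradient
    return abs(w - target)

def meta_learner(initial_rate=0.001, rounds=10):
    """Learning to learn: tune the inner learner's rate based on observed performance."""
    rate, best_error = initial_rate, float("inf")
    for _ in range(rounds):
        error = inner_learner(rate)
        if error < best_error:          # progress: push further in the same direction
            best_error, rate = error, rate * 2
        else:                           # overshot: back off
            rate /= 2
    return rate, best_error

print(meta_learner())  # a tuned learning rate and its residual error
```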

Toward Beneficial General Intelligence

If my perspective on AGI is correct, then once each of these four aspects is advanced beyond the current state, we’re going to be there—AGI at the human level and beyond.
I find this prospect tremendously exciting, and just a little scary. I am also aware that some observers, including big names like Stephen Hawking and Elon Musk, have expressed the reverse sentiment: more fear than excitement. I think nearly everyone who is serious about AGI development has put a lot of thought into the mitigation of the relevant risks.
One conclusion I have come to via my work on AI and robotics is: if we want our AGIs to absorb and understand human culture and values, the best approach will be to embed these AGIs in shared social and emotional contexts with people. I feel we are doing the right thing in our work with Sophia at Hanson Robotics; in recent experiments, we used Sophia as a meditation guide.
I have also been passionate in the last few years about working to ensure AI develops in a way that is egalitarian and participatory across the world economy, rather than in a manner driven mainly by the bottom lines of large corporations or the military needs of governments.  Put simply: I would rather have a benevolent, loving AI become superintelligent than a killer military robot, an advertising engine, or an AI hedge fund. This has been part of my motivation in launching the SingularityNET project—to use the power of AI and blockchain together to provide an open marketplace in which anyone on the planet can provide or utilize the world’s most powerful AI, for any purpose. If an AGI emerges from a participatory “economy of minds” of this nature, it is more likely to have an ethical and inclusive mindset coming out of the gate.
We are venturing into unknown territory here, not only intellectually and technologically, but socially and philosophically as well. Let us do our best to carry out this next stage of our collective voyage in a manner that is wise and cooperative as well as clever and fascinating.
Dr. Ben Goertzel is the CEO of the decentralized AI network SingularityNET, a blockchain-based AI platform company, and the chief scientist at Hanson Robotics. Dr. Goertzel also serves as Chairman of the Artificial General Intelligence Society and the OpenCog Foundation. Dr. Goertzel is one of the world’s foremost experts in Artificial General Intelligence, a subfield of AI oriented toward ...

Wednesday, August 15, 2018

My synapse points are overworking!

Amazing New Brain Map of Every Synapse Points to the Roots of Thinking

Imagine a map of every single star in an entire galaxy. A map so detailed that it lays out what each star looks like, what it is made of, and how it is connected to other stars through the grand physical laws of the cosmos.
While we don’t yet have such an astronomical map of the heavens, thanks to a momentous study published last week in Neuron, there is now one for the brain.
If every neuron were a galaxy, then synapses—small structures dotted along the serpentine extensions of neurons—would be its stars. In a technical tour-de-force, a team from the University of Edinburgh in the UK constructed the first detailed map of every single synapse in the mouse brain.
Using genetically modified mice, the team made each synapse throughout the brain light up under fluorescent light like a starry night. And much as stars differ from one another, the team found that synapses varied vastly, but in striking patterns that may support memory and thinking.
“There are more synapses in a human brain than there are stars in the galaxy. The brain is the most complex object we know of and understanding its connections at this level is a major step forward in unravelling its mysteries,” said lead author Dr. Seth Grant at the Centre for Clinical Brain Sciences.
The detailed maps revealed a fundamental law of brain activity. With the help of machine learning, the team categorized roughly one billion synapses across the brain into 37 sub-types. Here’s the kicker: when sets of neurons receive electrical information (for example, while weighing different solutions to a problem), particular sub-types of synapses spread out among different neurons spark with activity in unison.
In other words: synapses come in types. And each type may control a thought, a decision, or a memory.
The neuroscience Twittersphere blew up.
“Whoa,” Dr. Ben Saunders of the University of Minnesota commented simply.
It’s an “amazing paper cataloguing the diversity and distribution of synapse sub-types across the entire mouse brain,” wrote neurogeneticist Dr. Kevin Mitchell. It “highlights [the] fact that synapses are the key computational elements in the nervous system.”

The Connectome Connection

The team’s interest in constructing the “synaptome”—the first entire catalog of synapses in the mouse brain—stemmed from a much larger project: the connectome.
In a nutshell, the connectome is all the neuronal connections within you. Evangelized by Dr. Sebastian Seung in a TED Talk, the connectome is the biological basis of who you are—your memories, personality, and how you reason and think. Capture the connectome, and one day scientists may be able to reconstruct you—something known as whole brain emulation.
Yet the connectome only describes how neurons functionally talk to each other. Where in the brain is it physically encoded?
Enter synapses. Neuroscientists have long known that synapses transmit information between neurons using chemicals and electricity. There have also been hints that synapses are widely diverse in terms of the proteins they contain, but traditionally this diversity has been mostly ignored. Until recently, most scientists believed that actual computations occur at the neuronal body—the bulbous part of a neuron from which branches reach out.
So far there’s never been a way to look at the morphology and function of synapses across the entire brain, the authors explained. Rather, we’ve been focused on mapping these crucial connection points in small areas.
“Synaptome mapping could be used to ask if the spatial distribution of synapses [that differ] is related to connectome architecture,” the team reasoned.
And if so, future brain emulators may finally have something solid to grasp onto.

SYNMAP

To construct the mouse synaptome, the authors developed a pipeline that they dubbed SYNMAP. They started with genetically modified mice whose synapses glow in different colors. Each synapse is jam-packed with different proteins, with—stay with me—PSD-95 and SAP102 being two of the most prominent members. The authors added glowing proteins to these, which essentially acted as torches to light up each synapse in the brain.
Image: Synaptome mapping pipeline. The team first bioengineered a mouse whose synapses glow under fluorescent light.
Next, they painstakingly chopped up the brain into slices, used a microscope to capture images of synapses in different brain regions, and pieced the photos back together.
An image of synapses looks like a densely-packed star map to an untrained eye. Categorizing each synapse is beyond the ability (and time commitment) of any human researcher, so the team took advantage of new machine learning classification techniques, and developed an algorithm that could parse these data—more than 10 terabytes—automatically, without human supervision.
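In spirit, that classification step looks something like the sketch below: each imaged synaptic punctum becomes a feature vector (say, its size and the brightness of its PSD-95 and SAP102 signals), and an unsupervised algorithm groups the vectors into subtypes. The random stand-in data and the off-the-shelf k-means clustering used here are illustrative only; the team's actual pipeline and its 37 subtypes rest on far richer features and a purpose-built classifier:

```python
# Illustrative only: cluster stand-in synapse features into subtypes without labels.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Stand-in measurements for 10,000 synapses: [punctum size, PSD-95 intensity, SAP102 intensity]
features = rng.random((10_000, 3))

# Group synapses into subtypes with no human-provided labels (unsupervised learning).
subtype_labels = KMeans(n_clusters=37, n_init=10, random_state=0).fit_predict(features)

# A region's "synaptome signature" could then be summarized as its subtype proportions.
signature = np.bincount(subtype_labels, minlength=37) / len(subtype_labels)
print(signature.round(3))
```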

A Physical Connectome

Right off the bat, the team was struck by the “exquisite patterns” the glowing synapses formed. One tagged protein—PSD-95—seemed to hang out on the more exterior portions of the brain, where higher cognitive functions occur. Although there is overlap, the other glowing protein, SAP102, preferred more interior regions of the brain.
Image: Whole-brain-scale mapping. Microscope images showing the two glowing synapse proteins, PSD-95 and SAP102, across brain sections.
When they looked closely, they found that the two glowing proteins represented different sets of synapses, the authors explained. Each region of the brain has a characteristic “synaptome signature.” Like fingerprints that differ in shape and size, various brain regions also seemed to contain synapses that differ in their protein composition, size, and number.
Using a machine learning algorithm developed in-house, the team categorized the synapses into 37 subtypes. Remarkably, regions of the brain related to higher reasoning and thinking abilities also contained the most diverse synapse population, whereas “reptile brain regions” such as the brain stem were more uniform in synapse sub-type.
Image: Synaptome dominant-subtype maps. A graph of a brain cross-section showing some of the most commonly found synapse subtypes in each area; each color represents a different synapse subtype. “Box 4” highlights the hippocampus.

Why?

To see whether synapse diversity helps with information processing, the team used computer simulations to see how synapses would respond to common electrical patterns within the hippocampus—the seahorse-shaped region crucial for learning and memory. The hippocampus was one of the regions that showed remarkable diversity in synapse subtypes, with each spread out in striking patterns throughout the brain structure.
Remarkably, each type of electrical information processing translated to a unique synaptome map—change the input, change the synaptome.
This suggests that the brain can process multiple kinds of electrical information within the same brain region, because different synaptomes are recruited.
The team found similar results when they used electrical patterns recorded from mice trying to choose between three options for a reward. Different synaptomes lit up when the choice was correct versus wrong. Like a map into internal thoughts, synaptomes drew a vivid picture of what the mouse was thinking when it made its choice.
Image: Synaptome maps tie function to behavior and physiology. Each behavior activates a particular synaptome; each synaptome is like a unique fingerprint of a thought process.
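To get a feel for the "change the input, change the synaptome" result, here is a toy simulation: each synapse subtype is given a different preferred input pattern, subtypes are scattered across a small patch of "brain," and different inputs light up different synapses. The tuning rule and all the numbers are invented; the study's simulations of hippocampal responses are far more detailed:

```python
# Toy model: different input patterns recruit different synapse subtypes, producing
# different "synaptome maps." Purely illustrative, not the study's actual simulation.
import numpy as np

rng = np.random.default_rng(1)
n_subtypes = 5
region = rng.integers(0, n_subtypes, size=(8, 8))         # subtype ID at each location
preferred_freq = np.linspace(5, 100, n_subtypes)           # Hz; one preference per subtype

def synaptome_map(input_freq, bandwidth=15.0):
    """Return the activation map of the region for one electrical input pattern."""
    tuning = np.exp(-((preferred_freq - input_freq) / bandwidth) ** 2)
    return tuning[region]                                   # look up each synapse's response

theta_map = synaptome_map(8.0)    # a slow, theta-like input
gamma_map = synaptome_map(80.0)   # a fast, gamma-like input

# Change the input, change the synaptome: different synapses dominate each map.
print((theta_map > 0.5).sum(), (gamma_map > 0.5).sum())
```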

Synaptome Reprogramming

Like computer code, a synaptome seems to underlie a computational output—a decision or thought. So what if the code is screwed up?
Psychiatric diseases often have genetic causes that impact proteins in the synapse. Using mice that show symptoms similar to schizophrenia or autism, the team mapped their synaptome—and found dramatic changes in how the brain’s various synapse sub-types are structured and connected.
For example, in response to certain normal brain electrical patterns, some synaptome maps only weakly emerged, whereas others became abnormally strong in the mutant mice.
Image: Synaptome reprogramming. Mutations can change the synaptome and potentially lead to psychiatric disorders.
It seems like certain psychiatric diseases “reprogram” the synaptome, the authors concluded. Stronger or new synaptome maps could, in fact, be why patients with schizophrenia experience delusions and hallucinations.

So Are You Your Synaptome?

Perhaps. The essence of you—memories, thought patterns—seems to be etched into how diverse synapses activate in response to input. Like a fingerprint for memories and decisions, synaptomes can then be “read” to decipher that thought.
But as the authors acknowledge, the study’s only the beginning. Along with the paper, the team launched a Synaptome Explorer tool to help neuroscientists further parse the intricate connections between synapses and you.
“This map opens a wealth of new avenues of research that should transform our understanding of behavior and brain disease,” said Grant.
Images Credit: Derivatives of Fei Zhu et al. / University of Edinburgh / CC BY 4.0
Shelly Xuelai Fan is a neuroscientist at the University of California, San Francisco, where she studies ways to make old brains young again. In addition to research, she's also an avid science writer with an insatiable obsession with biotech, AI and all things neuro. She spends her spare time kayaking, bike camping and getting lost in the woods.
