The evolution of neural plasticity in digital organisms
Learning is a phenomenon that organisms throughout nature demonstrate and that machine learning aims to replicate. In nature, it is neural plasticity that allows an organism to integrate the outcomes of its past experiences into its selection of future actions. While neurobiology has identified some of the mechanisms used in this integration, how the process works remains a relatively unclear and heavily researched topic in the cognitive sciences. Meanwhile, in the field of machine learning, researchers aim to create algorithms that are also able to learn from past experiences; this endeavor is complicated by the limited understanding of how the process takes place within natural organisms.

In this dissertation, I extend the Markov Brain framework [1, 2], which consists of evolvable networks of probabilistic and deterministic logic gates, to include a novel gate type: feedback gates. Feedback gates use internally generated feedback to learn how to navigate a complex task in the same manner a natural organism would. The evolutionary path the Markov Brains take to develop this ability provides insight into the evolution of learning. I show that feedback gates allow Markov Brains to evolve the ability to learn to navigate environments by relying solely on their experiences. In fact, the probabilistic logic tables of these gates adapt to the point where an input almost always results in a single output; the gates become nearly deterministic. Further, I show that the mechanism the gates use to adapt their probability tables is robust enough to allow the agents to successfully complete the task in novel environments. This ability to generalize to the environment means that the Markov Brains with feedback gates that emerge from evolution learn autonomously, that is, without external feedback. In the context of machine learning, this allows algorithms to be trained based solely on how they interact with the environment.
Once a Markov Brain can generalize, it is able to adapt to changing sets of stimuli, i.e., to reversal learn. Machines that are able to reversal learn are no longer limited to solving a single task. Lastly, I show that the neuro-correlate Φ is increased through neural plasticity using Markov Brains augmented with feedback gates. The measurement of Φ is based on Information Integration Theory [3, 4] and quantifies the agent's ability to integrate information.
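The feedback-gate mechanism summarized above can be illustrated with a minimal sketch. This is not the dissertation's implementation: the class name, the reinforcement-style update rule, and all parameters (learning rate, clamping bounds) are assumptions chosen for illustration. It shows only the core idea the abstract describes: a probabilistic logic table whose entries are nudged by internally generated feedback until an input almost always yields a single output.

```python
import random


class FeedbackGate:
    """Illustrative sketch of a probabilistic logic gate whose table
    adapts from feedback (parameters and update rule are assumptions)."""

    def __init__(self, n_inputs, n_outputs, learning_rate=0.1):
        self.learning_rate = learning_rate
        # One probability row per input bit-pattern, initially uniform.
        self.table = {
            pattern: [1.0 / n_outputs] * n_outputs
            for pattern in range(2 ** n_inputs)
        }
        self.last = None  # most recent (input pattern, chosen output)

    def fire(self, pattern):
        """Sample an output for this input pattern from its probability row."""
        probs = self.table[pattern]
        out = random.choices(range(len(probs)), weights=probs)[0]
        self.last = (pattern, out)
        return out

    def feedback(self, positive):
        """Strengthen (or weaken) the last input-to-output mapping,
        then renormalize the row so it remains a distribution."""
        if self.last is None:
            return
        pattern, out = self.last
        row = self.table[pattern]
        delta = self.learning_rate if positive else -self.learning_rate
        row[out] = min(1.0, max(0.01, row[out] + delta))
        total = sum(row)
        self.table[pattern] = [p / total for p in row]
```

Under repeated positive feedback for the same mapping, the row's probability mass concentrates on one output, mirroring the abstract's observation that the gates' tables become nearly deterministic through experience alone.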
- In Collections: Electronic Theses & Dissertations
- Copyright Status: Attribution 4.0 International
- Material Type: Theses
- Authors: Sheneman, Leigh
- Thesis Advisors: Hintze, Arend
- Committee Members: Adami, Christoph; Dyer, Fred; Ofria, Charles
- Date Published: 2017
- Program of Study: Computer Science - Doctor of Philosophy
- Degree Level: Doctoral
- Language: English
- Pages: xiv, 105 pages
- ISBN: 9780355539134; 0355539136
- Permalink: https://doi.org/doi:10.25335/yd1t-xy93