Blog / News

Evolving neural networks for online learning and the Goals of AI

In this post I'm going to talk a bit about where I think the sub-sub-field of evolving adaptive neural networks for online learning is up to in the context of the Big Picture.

New publication, AAAI symposium debrief, new informal advisor, favourable mention

Well, how time passes: November 2013 came and went, and with it the AAAI 2013 Fall Symposium, where I presented a paper titled Models of Brains: What Should We Borrow From Biology? I finally met in person a lot of people whose research I follow and who I've bounced ideas off over various electronic channels over the last few years. It really was great to have lunches and dinners with these people; many interesting conversations were had (interesting for me at least ;)). There were some thought-provoking presentations (my favourite being Gary Marcus's, which offered a fresh perspective on current AI research from a cognitive science angle), and I went away with new inspiration.

AAAI Fall Symposium 2013

I'll be presenting a paper at the AAAI 2013 Fall Symposium on November 15–17 in Virginia, USA. The topic of the symposium is How Should Intelligence be Abstracted in AI Research: MDPs, Symbolic Representations, Artificial Neural Networks, or — ? I'm excited about meeting many of the people who wrote the material I reviewed in my last paper (Evolving Plastic Neural Networks for Online Learning: Review and Future Directions)! The title of my paper is Models of Brains: What Should We Borrow From Biology? Abstract:

Look, it's a gallery!

Every now and then, the code I write to visualise what's happening in an experiment produces pretty images. I've added a gallery section so that you can appreciate the aesthetically pleasing results too. Enjoy!

Open Science - What About the Source Code?

There's been a lot of talk about open science lately: the idea that all research results and/or data should be made freely available, particularly for research funded by the public (seems pretty obvious, really). Less talked about in the context of open science is source code (though the idea of open source software is plenty talked about in certain circles, and most scientists will have at least come across open source software even if they don't use it daily). While this may not matter much to the general public, it's of great interest to other scientists and, increasingly, to the practice of science itself.

Review paper to be presented at AJCAI 2012

My first paper on this work, Evolving Plastic Neural Networks for Online Learning: Review and Future Directions, will be presented at the 25th Australasian Joint Conference on Artificial Intelligence, 4-7 December 2012 (apparently it's "the premier event for Artificial Intelligence researchers in Australasia and one of the major international forums on AI worldwide", woo!). From the abstract:

Fun with spike timing dependent synaptic plasticity

I've been looking at computational models of synaptic plasticity for spiking neuron models, to incorporate into a grand unified neural network model (more on this later; don't take the "grand unified" too seriously...). My two favourites so far are the ones described in "Triplets of Spikes in a Model of Spike Timing-Dependent Plasticity" and "Calcium-Based Plasticity Model Explains Sensitivity of Synaptic Changes to Spike Pattern, Rate, and Dendritic Location". However, there were some things that just weren't entirely clear from reading these articles (to me, at least), and I wanted to get a good feel for how they behaved.
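To make the basic mechanism concrete before diving into those two models, here is a minimal sketch of the classic pair-based STDP window that both of them generalise. This is not the triplet or calcium-based rule from the papers; the function name and parameter values are illustrative assumptions of my own:

    # Classic pair-based STDP: the weight change for a single pre/post spike
    # pair depends only on the timing difference dt = t_post - t_pre (in ms).
    # Parameter values are illustrative, not taken from either paper.
    import math

    def stdp_dw(dt_ms, a_plus=0.01, a_minus=0.012, tau_plus=20.0, tau_minus=20.0):
        """Pre-before-post (dt >= 0) potentiates; post-before-pre depresses."""
        if dt_ms >= 0:
            return a_plus * math.exp(-dt_ms / tau_plus)
        return -a_minus * math.exp(dt_ms / tau_minus)

    # Trace out the familiar asymmetric exponential window:
    for dt in (-50, -20, -5, 5, 20, 50):
        print(f"dt = {dt:+3d} ms -> dw = {stdp_dw(dt):+.5f}")

The triplet model extends this by making the update depend on the last two post-synaptic spikes rather than just one, which lets it capture frequency effects that the pair-based rule misses.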

Are positive results on one or two tasks significant?

My supervisor, Alan Blair, and I were talking the other day about the problem domains and tasks used to assess the performance and capabilities of machine learning and artificial intelligence methods/approaches/algorithms/models (let's just refer to these collectively as MAAMs, shall we?). Traditionally, when someone introduces a new MAAM (e.g. a new neural network model, neural network encoding scheme, or evolutionary algorithm) they report the results of experiments that test the new MAAM on one or two tasks. (Also, typically only positive results are reported, but that's another topic.) Sometimes the new MAAM is designed for some specific task, so performance on other tasks is irrelevant. But often we do care about how well a new MAAM will perform on many kinds of tasks.
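As a back-of-the-envelope illustration of why one or two tasks tell us so little: suppose all we record is whether the new MAAM beats some baseline on each task, and the sceptic's null hypothesis is that each task is a 50/50 coin flip. A simple one-sided sign test (the task counts below are illustrative, not from any particular paper) shows that even a perfect record on two tasks is entirely consistent with chance:

    # One-sided sign test: how surprising is "beat the baseline on every task"
    # if each task were really a coin flip? Task counts are illustrative.
    def p_all_wins(n_tasks):
        """P(win all n_tasks) under the fair-coin null."""
        return 0.5 ** n_tasks

    for n in (1, 2, 5, 10):
        print(f"beat the baseline on all {n} tasks: p = {p_all_wins(n):.4f}")
    # all  1: p = 0.5000
    # all  2: p = 0.2500  (well above the conventional 0.05 threshold)
    # all  5: p = 0.0312
    # all 10: p = 0.0010

Of course this ignores effect sizes and the (non-)independence of tasks, but it makes the point: a win on one or two tasks, by itself, is weak evidence of general capability.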
