Plausible Neural Networks for Biological Modelling

Author: H.A. Mastebroek
Publisher: Springer Science & Business Media
Total Pages: 264
Release: 2012-12-06
Genre: Computers
ISBN: 9401006741

The expression 'Neural Networks' traditionally refers to a class of mathematical algorithms that achieve their performance by 'learning' from examples or from experience. As a consequence, they are suitable for straightforward and relatively simple tasks such as classification, pattern recognition and prediction, as well as for more sophisticated tasks such as the processing of temporal sequences and the context-dependent processing of complex problems. They can also execute a wide variety of control tasks, and the suggestion is tempting that neural networks perform adequately in such cases because they are thought to mimic the biological nervous system, which is also devoted to such tasks. As we shall see, this suggestion is false, but it does no harm as long as only the final performance of the algorithm counts. Neural networks are also used to model the functioning of (subsystems of) the biological nervous system. In such cases it is certainly not irrelevant how closely their algorithm resembles what is actually going on in the nervous system. Standard artificial neural networks are constructed from 'units' (roughly similar to neurons) that transmit their 'activity' (similar to membrane potentials or to mean firing rates) to other units via 'weight factors' (similar to synaptic coupling efficacies).
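The unit/activity/weight picture in this blurb can be sketched in a few lines of code. This is a generic illustration, not an example from the book: a single unit sums the weighted activities of other units and passes the result through a sigmoid, so its output plays the role of a mean firing rate.

```python
# Illustrative sketch of a standard artificial neural network 'unit':
# incoming activities are combined via weight factors (the analogue of
# synaptic coupling efficacies) and squashed into a bounded activity.
import math

def unit_activity(inputs, weights, bias=0.0):
    """Weighted sum of incoming activities passed through a sigmoid."""
    total = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1.0 / (1.0 + math.exp(-total))  # activity stays in (0, 1)

print(unit_activity([0.5, 1.0], [0.8, -0.3]))
```

Learning, in this picture, amounts to adjusting the `weights` so that the unit's activity matches desired examples.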

Plausible Neural Networks for Biological Modelling

Author: H.A. Mastebroek
Publisher: Springer Science & Business Media
Total Pages: 276
Release: 2001-09-30
Genre: Computers
ISBN: 9780792371922

This book has the unique intention of returning the mathematical tools of neural networks to the biological realm of the nervous system, where they originated a few decades ago. It aims to introduce, in a didactic manner, two relatively recent developments in neural network methodology, namely recurrence in the architecture and the use of spiking or integrate-and-fire neurons. In addition, the neuro-anatomical processes of synapse modification during development, training and memory formation are discussed as realistic bases for weight adjustment in neural networks. While neural networks have many applications outside biology, where it is irrelevant precisely which architecture and which algorithms are used, in biological modelling it is essential that the network's properties correspond closely to those of the neurobiological phenomenon being modelled or simulated. A recurrent architecture, the use of spiking neurons and appropriate weight-update rules all contribute to the plausibility of a neural network in such a case. Therefore, the first half of this book lays the foundations for the application of neural networks as models of the various biological phenomena treated in the second half. These include various neural network models of sensory and motor control tasks that implement one or several of the requirements for biological plausibility.
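The integrate-and-fire neuron mentioned here can be sketched compactly. The parameter values below (time constant, threshold, reset) are illustrative assumptions, not taken from the book: the membrane potential leaks toward rest, integrates input current, and emits a spike whenever it crosses threshold.

```python
# Hedged sketch of a leaky integrate-and-fire neuron, one of the
# "spiking" unit types the book introduces. All constants are
# illustrative, not values from the text.
def lif_simulate(input_current, dt=1.0, tau=10.0, v_rest=0.0,
                 v_threshold=1.0, v_reset=0.0):
    """Return spike times for a list of input-current samples."""
    v = v_rest
    spikes = []
    for step, i_in in enumerate(input_current):
        # potential leaks toward rest and integrates the input current
        v += dt * (-(v - v_rest) / tau + i_in)
        if v >= v_threshold:       # threshold crossing -> emit a spike
            spikes.append(step * dt)
            v = v_reset            # reset after firing
    return spikes

print(lif_simulate([0.15] * 50))  # regular spiking under constant drive
```

Unlike the rate-based unit of standard networks, this model's output is a set of discrete spike times, which is what makes it a more plausible building block for biological modelling.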

Time-Space, Spiking Neural Networks and Brain-Inspired Artificial Intelligence

Author: Nikola K. Kasabov
Publisher: Springer
Total Pages: 742
Release: 2018-08-29
Genre: Technology & Engineering
ISBN: 3662577151

Spiking neural networks (SNN) are biologically inspired computational models that represent and process information internally as trains of spikes. This monograph presents the classical theory and applications of SNN, including the author's original contributions to the area. The book introduces, for the first time, deep learning and deep knowledge representation in the human brain and in brain-inspired SNN, and takes this further to develop new types of AI systems, called here brain-inspired AI (BI-AI). BI-AI systems are illustrated on: cognitive brain data, including EEG, fMRI and DTI; audio-visual data; brain-computer interfaces; personalized modelling in bio-neuroinformatics; multisensory streaming-data modelling in finance, environment and ecology; data compression; and neuromorphic hardware implementation. Future directions, such as the integration of multiple modalities of information processing (quantum, molecular and brain), are presented in the last chapter. The book is a research monograph for postgraduate students, researchers and practitioners across a wide range of areas, including computer and information sciences, engineering, applied mathematics, and the bio- and neurosciences.
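One simple way an SNN can "represent information internally as trains of spikes" is rate coding. The sketch below is a generic illustration of that idea, not a method from this book: an analog value in [0, 1] becomes the firing probability of a binary spike train, so the value can be read back as the observed firing rate.

```python
# Illustrative rate-coding sketch: encode an analog value as a spike
# train whose mean firing rate approximates the value. Generic example,
# not an algorithm taken from the monograph.
import random

def rate_encode(value, n_steps=1000, seed=0):
    """Encode a value in [0, 1] as a binary spike train (Bernoulli coding)."""
    rng = random.Random(seed)
    return [1 if rng.random() < value else 0 for _ in range(n_steps)]

train = rate_encode(0.3)
print(sum(train) / len(train))  # observed firing rate, close to 0.3
```

Temporal codes (where the precise spike timing carries information) are richer than this, but rate coding is the easiest entry point to the spike-train representation.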

The Handbook of Brain Theory and Neural Networks

Author: Michael A. Arbib
Publisher: MIT Press
Total Pages: 1328
Release: 2003
Genre: Neural circuitry
ISBN: 0262011972

This second edition presents the enormous progress made in recent years in the many subfields related to the two great questions: How does the brain work? and How can we build intelligent machines? It greatly increases the coverage of models of fundamental neurobiology, cognitive neuroscience, and neural network approaches to language. (Midwest).

Handbook of Natural Computing

Author: Grzegorz Rozenberg
Publisher: Springer
Total Pages: 2052
Release: 2012-07-09
Genre: Computers
ISBN: 9783540929093

Natural Computing is the field of research that investigates both human-designed computing inspired by nature and computing taking place in nature; that is, it investigates models and computational techniques inspired by nature, as well as phenomena occurring in nature viewed in terms of information processing. Examples of the first strand of research covered by the handbook include neural computation, inspired by the functioning of the brain; evolutionary computation, inspired by Darwinian evolution of species; cellular automata, inspired by intercellular communication; swarm intelligence, inspired by the behavior of groups of organisms; artificial immune systems, inspired by the natural immune system; artificial life systems, inspired by the properties of natural life in general; membrane computing, inspired by the compartmentalized ways in which cells process information; and amorphous computing, inspired by morphogenesis. Other examples of natural-computing paradigms are molecular computing and quantum computing, where the goal is to replace traditional electronic hardware, e.g., by bioware in molecular computing. In molecular computing, data are encoded as biomolecules, and molecular-biology tools are then used to transform the data, thus performing computations. In quantum computing, quantum-mechanical phenomena are exploited to perform computations and secure communications more efficiently than classical physics, and hence traditional hardware, allows.
The second strand of research covered by the handbook, computation taking place in nature, is represented by investigations into, among others, the computational nature of self-assembly, which lies at the core of nanoscience, the computational nature of developmental processes, the computational nature of biochemical reactions, the computational nature of bacterial communication, the computational nature of brain processes, and the systems biology approach to bionetworks where cellular processes are treated in terms of communication and interaction, and, hence, in terms of computation. We are now witnessing exciting interaction between computer science and the natural sciences. While the natural sciences are rapidly absorbing notions, techniques and methodologies intrinsic to information processing, computer science is adapting and extending its traditional notion of computation, and computational techniques, to account for computation taking place in nature around us. Natural Computing is an important catalyst for this two-way interaction, and this handbook is a major record of this important development.
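The cellular-automaton paradigm listed above is small enough to demonstrate directly. This is a generic sketch (elementary CA rule 110, a standard textbook example, not one taken from the handbook): each cell updates synchronously from its own state and its two neighbours.

```python
# Minimal elementary cellular automaton, illustrating the "cellular
# automata" strand of natural computing. Rule 110 is used as a
# well-known example; the rule number is just an 8-bit lookup table.
def ca_step(cells, rule=110):
    """One synchronous update on a ring; each cell sees left, self, right."""
    n = len(cells)
    out = []
    for i in range(n):
        neighbourhood = (cells[(i - 1) % n] << 2) | (cells[i] << 1) | cells[(i + 1) % n]
        out.append((rule >> neighbourhood) & 1)  # look up the new state
    return out

row = [0] * 10 + [1] + [0] * 10   # a single live cell as the seed
for _ in range(5):
    row = ca_step(row)
print(row)
```

The global pattern that emerges from these purely local rules is the sense in which such systems "compute" in a nature-inspired way.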

Artificial Neural Networks in Pattern Recognition

Author: Nadia Mana
Publisher: Springer
Total Pages: 253
Release: 2012-09-11
Genre: Computers
ISBN: 3642332129

This book constitutes the refereed proceedings of the 5th INNS IAPR TC3 GIRPR International Workshop on Artificial Neural Networks in Pattern Recognition, ANNPR 2012, held in Trento, Italy, in September 2012. The 21 revised full papers presented were carefully reviewed and selected for inclusion in this volume. They cover a broad range of topics in neural network- and machine learning-based pattern recognition, presenting and discussing the latest research, results and ideas in these areas.

Neural Information Processing

Author: Minho Lee
Publisher: Springer
Total Pages: 655
Release: 2013-10-29
Genre: Computers
ISBN: 3642420516

The three-volume set LNCS 8226, LNCS 8227, and LNCS 8228 constitutes the proceedings of the 20th International Conference on Neural Information Processing, ICONIP 2013, held in Daegu, Korea, in November 2013. The 180 full and 75 poster papers presented together with 4 extended abstracts were carefully reviewed and selected from numerous submissions. These papers cover all major topics of theoretical research, empirical study and applications of neural information processing research. The specific topics covered are as follows: cognitive science and artificial intelligence; learning theory, algorithms and architectures; computational neuroscience and brain imaging; vision, speech and signal processing; control, robotics and hardware technologies; and novel approaches and applications.

Fundamentals of Neural Network Modeling

Author: Randolph W. Parks
Publisher: MIT Press
Total Pages: 450
Release: 1998
Genre: Computers
ISBN: 9780262161756

Provides an introduction to the neural network modeling of complex cognitive and neuropsychological processes. Over the past few years, computer modeling has become more prevalent in the clinical sciences as an alternative to traditional symbol-processing models. This book provides an introduction to the neural network modeling of complex cognitive and neuropsychological processes. It is intended to make the neural network approach accessible to practicing neuropsychologists, psychologists, neurologists, and psychiatrists. It will also be a useful resource for computer scientists, mathematicians, and interdisciplinary cognitive neuroscientists. The editors (in their introduction) and contributors explain the basic concepts behind modeling and avoid the use of high-level mathematics. The book is divided into four parts. Part I provides an extensive but basic overview of neural network modeling, including its history, present, and future trends. It also includes chapters on attention, memory, and primate studies. Part II discusses neural network models of behavioral states such as alcohol dependence, learned helplessness, depression, and waking and sleeping. Part III presents neural network models of neuropsychological tests such as the Wisconsin Card Sorting Task, the Tower of Hanoi, and the Stroop Test. Finally, part IV describes the application of neural network models to dementia: models of acetylcholine and memory, verbal fluency, Parkinson's disease, and Alzheimer's disease. Contributors J. Wesson Ashford, Rajendra D. Badgaiyan, Jean P. Banquet, Yves Burnod, Nelson Butters, John Cardoso, Agnes S. Chan, Jean-Pierre Changeux, Kerry L. Coburn, Jonathan D. Cohen, Laurent Cohen, Jose L. Contreras-Vidal, Antonio R. Damasio, Hanna Damasio, Stanislas Dehaene, Martha J. Farah, Joaquin M. Fuster, Philippe Gaussier, Angelika Gissler, Dylan G. Harwood, Michael E. Hasselmo, J. Allan Hobson, Sam Leven, Daniel S. Levine, Debra L. Long, Roderick K. Mahurin, Raymond L. Ownby, Randolph W. Parks, Michael I. Posner, David P. Salmon, David Servan-Schreiber, Chantal E. Stern, Jeffrey P. Sutton, Lynette J. Tippett, Daniel Tranel, Bradley Wyble

Advances in Machine Learning II

Author: Jacek Koronacki
Publisher: Springer
Total Pages: 530
Release: 2009-11-27
Genre: Computers
ISBN: 3642051790

Professor Richard S. Michalski passed away on September 20, 2007. Once we learned about his untimely death we immediately realized that we would no longer have with us a truly exceptional scholar and researcher who for several decades had been influencing the work of numerous scientists all over the world - not only in his area of expertise, notably machine learning, but also in the broadly understood areas of data analysis, data mining, knowledge discovery and many others. In fact, his influence was even much broader due to his creative vision, integrity, scientific excellence and exceptionally wide intellectual horizons which extended to history, political science and arts. Professor Michalski's death was a particularly deep loss to the whole Polish scientific community and the Polish Academy of Sciences in particular. After graduation, he began his research career at the Institute of Automatic Control, Polish Academy of Sciences in Warsaw. In 1970 he left his native country and held various prestigious positions at top US universities. His research gained impetus and he soon established himself as a world authority in his areas of interest - notably, he was widely considered a father of machine learning.

Advances in Neuro-Information Processing

Author: Mario Köppen
Publisher: Springer Science & Business Media
Total Pages: 1273
Release: 2009-07-10
Genre: Computers
ISBN: 3642024890

The two-volume set LNCS 5506 and LNCS 5507 constitutes the thoroughly refereed post-conference proceedings of the 15th International Conference on Neural Information Processing, ICONIP 2008, held in Auckland, New Zealand, in November 2008. The 260 revised full papers presented were carefully reviewed and selected from numerous regular paper submissions and 15 specially organized sessions. 116 papers are published in the first volume and 112 in the second volume. The contributions deal with topics in the areas of data mining methods for cybersecurity, computational models and their applications to machine learning and pattern recognition, lifelong incremental learning for intelligent systems, application of intelligent methods in ecological informatics, pattern recognition from real-world information by SVM and other sophisticated techniques, dynamics of neural networks, recent advances in brain-inspired technologies for robotics, and neural information processing in cooperative multi-robot systems.