
Past Computer Science Seminars by Reverse Date



Spatio-temporal GIS
Prof. Christophe Claramunt (Naval Academy Research Institute, Brest, France)
4 Jul 2011, Harrison 170, Monday 3pm, Computer Science


Decentralized spatial computing for geosensor networks, especially in movement analysis
Dr Patrick Laube (Department of Geography, University of Zurich)
4 Jul 2011, Harrison 170, Monday 4pm, Computer Science


Latent Force Models
Prof. Neil Lawrence (Department of Computer Science, University of Sheffield)
16 Mar 2011, Harrison 215, Wednesday 2pm, Computer Science
Physics-based approaches to data modelling involve constructing an accurate mechanistic model of data, often based on differential equations. Machine learning approaches are typically data-driven, perhaps through regularized function approximation. These two approaches to data modelling are often seen as polar opposites, but in reality they are two ends of a spectrum of approaches we might take. In this talk we introduce latent force models, a new approach to data representation that models data through unknown forcing functions that drive differential equation models. By treating the unknown forcing functions with Gaussian process priors we can create probabilistic models that exhibit particular physical characteristics of interest, for example resonance and inertia in dynamical systems. This allows us to perform a synthesis of the data-driven and physical modelling paradigms. We will show applications of these models in systems biology and (given time) modelling of human motion capture data.
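The mechanistic half of such a model can be illustrated with a first-order ODE, x'(t) = basal - decay*x(t) + f(t), where f(t) is the latent force. The sketch below (illustrative only, not from the talk; names and parameter values are placeholders) integrates this system with forward Euler, using a fixed test function where the full latent force model would place a Gaussian process prior on f:

```python
import math

def simulate_lfm_output(force, decay=0.5, basal=0.1, x0=0.0, t_end=10.0, dt=0.01):
    """Forward-Euler integration of x'(t) = basal - decay*x(t) + force(t).

    In a latent force model the unknown force(t) carries a Gaussian
    process prior; here a fixed test function stands in for one sample.
    """
    n_steps = int(round(t_end / dt))
    x, xs = x0, []
    for i in range(n_steps):
        t = i * dt
        x += dt * (basal - decay * x + force(t))
        xs.append(x)
    return xs

# Hypothetical forcing function standing in for a GP sample path.
trajectory = simulate_lfm_output(lambda t: math.sin(t))
```

With a GP prior on the force, the output process is itself a Gaussian process whose covariance encodes the decay and inertia of the differential equation.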


Differential Geometric MCMC Methods
Prof. Mark Girolami (Department of Statistical Science, University College London)
15 Mar 2011, Harrison 209, Tuesday 3pm, Computer Science
In recent years a reliance on MCMC methods has been developing as the “last resort” to perform inference over increasingly sophisticated statistical models used to describe complex phenomena. This presents a major challenge, as issues surrounding correct and efficient MCMC-based statistical inference over such models are of growing importance. This talk will argue that differential geometry provides the tools required to develop MCMC sampling methods suitable for challenging statistical models. By defining appropriate Riemannian metric tensors and corresponding Levi-Civita manifold connections, MCMC methods based on Langevin diffusions across the model manifold are developed. Furthermore, proposal mechanisms which follow geodesic flows across the manifold will be presented. The optimality of these methods in terms of mixing time will be discussed, and the strengths (and weaknesses) of such methods will be experimentally assessed on a range of statistical models such as Log-Gaussian Cox Point Process models and Mixture Models. This talk is based on work that was presented as a Discussion Paper to the Royal Statistical Society and a dedicated website with Matlab codes is available at
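The Riemannian machinery of the talk is beyond a short sketch, but its starting point, the Metropolis-adjusted Langevin algorithm on a flat (Euclidean) metric, is compact. The following illustrative code (not the speaker's implementation) samples a standard normal target:

```python
import math, random

def mala(log_density, grad_log_density, x0=0.0, step=0.5, n_samples=5000, seed=1):
    """Metropolis-adjusted Langevin: drift along the gradient of the
    log-density, then accept/reject so the chain targets exp(log_density)."""
    rng = random.Random(seed)
    x, samples = x0, []
    for _ in range(n_samples):
        mean_fwd = x + 0.5 * step * grad_log_density(x)
        prop = rng.gauss(mean_fwd, math.sqrt(step))
        mean_bwd = prop + 0.5 * step * grad_log_density(prop)
        log_q_fwd = -((prop - mean_fwd) ** 2) / (2 * step)
        log_q_bwd = -((x - mean_bwd) ** 2) / (2 * step)
        log_alpha = log_density(prop) - log_density(x) + log_q_bwd - log_q_fwd
        if math.log(rng.random()) < log_alpha:
            x = prop
        samples.append(x)
    return samples

# Standard normal target: log p(x) = -x^2/2, gradient -x.
chain = mala(lambda x: -0.5 * x * x, lambda x: -x)
```

The Riemannian variant of the talk replaces the identity preconditioner implicit here with a position-dependent metric tensor, so proposals adapt to the local geometry of the posterior.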


Metric Learning with Eigenvalue Optimization
Dr. Yiming Ying (Department of Computer Science, University of Exeter)
1 Mar 2011, Harrison 209, Tuesday 3pm, Computer Science (Internal)
In this talk I will mainly present a novel eigenvalue optimization framework for learning a Mahalanobis metric from data. Within this context, we introduce a novel metric learning approach called DML-Eig, which is shown to be equivalent to the well-known eigenvalue optimization problem of minimizing the maximal eigenvalue of a symmetric matrix. Moreover, we show that similar ideas can be extended to large margin nearest neighbour classifiers (LMNN) and maximum-margin matrix factorisation for collaborative filtering. This novel framework not only provides new insights into metric learning but also opens new avenues to the design of efficient metric learning algorithms. Indeed, first-order algorithms scalable to large datasets are developed and their convergence analysis will be discussed in detail. Finally, we show the competitiveness of our methods by various experiments on benchmark datasets. In particular, we report an encouraging result on a challenging face verification dataset called Labeled Faces in the Wild (LFW).
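The quantity at the heart of that equivalence, the maximal eigenvalue of a symmetric matrix, can be approximated without a linear algebra library by power iteration. A minimal illustrative sketch (not the DML-Eig algorithm itself):

```python
def max_eigenvalue(A, iters=200):
    """Approximate the largest-magnitude eigenvalue of a symmetric
    matrix by power iteration (assumes that eigenvalue is positive)."""
    n = len(A)
    v = [1.0] * n
    lam = 0.0
    for _ in range(iters):
        w = [sum(A[i][j] * v[j] for j in range(n)) for i in range(n)]
        lam = max(abs(x) for x in w) or 1.0  # infinity-norm estimate
        v = [x / lam for x in w]
    return lam

# Symmetric 2x2 matrix with eigenvalues 3 and 1.
lam = max_eigenvalue([[2.0, 1.0], [1.0, 2.0]])
```

DML-Eig itself minimises this quantity over a constraint set rather than merely evaluating it, but the evaluation step is the inner loop any such first-order method relies on.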


Discrete Mereotopology in automated histological image analysis
Dr. David A. Randell (Medical Imaging Research Group, College of Medical and Dental Sciences, University of Birmingham)
22 Feb 2011, Harrison 209, Tuesday 3pm, Computer Science
This cross-disciplinary talk covers the integration of qualitative spatial reasoning (QSR) with quantitative histological image processing methods using digitised images of stained tissue sections and other preparations examined under the microscope. The talk will show how the QSR spatial logic Discrete Mereotopology can be used to model and exploit topological properties of segmented images of cells and their parts and general tissue architecture. Relation sets and other mathematical structures extracted from the theory are factored out and used to complement and guide algorithmic-based segmentation methods. The net result is a change of emphasis away from classical pixel-based segmentation algorithms to one where the primary ontological primitives are regions and their spatial relationships each to the other. The work forms part of that done by the Medical Imaging Group with a long-standing interest in: image segmentation in histopathology, quantitative measures of tissue architecture and complex data characterisation and visualisation.


Automating the Heuristic Design Process
Dr. Matthew Hyde (ASAP Group, University of Nottingham)
26 Jan 2011, Harrison 170, Wednesday 2pm, Computer Science
The current state of the art in the development of search methodologies is focused on the design of bespoke systems, which are specifically tailored to a particular situation or organisation. Such bespoke systems are necessarily created by human experts, and so they are relatively expensive. Some of our research at Nottingham is concerned with how to build intelligent systems which are capable of automatically building new systems; in other words, automating some of the creative process to make it less expensive by being less reliant on human expertise. In this talk, I will present some work we have recently published on the automatic design of heuristics for two-dimensional stock cutting problems. The research shows that genetic programming can be used to evolve novel heuristics which are at least as good as human-designed heuristics for this problem. Research into the automatic design of heuristics could represent a change in the role of the human expert, from designing a heuristic methodology to designing a search space within which a good heuristic methodology is likely to exist. The computer then takes on the more tedious task of searching that space, while we can focus on the creative aspect of designing it.


Many Objective Optimisation of Engineering Problems
Dr. Evan J. Hughes (Department of Informatics and Sensors, University of Cranfield)
19 Jan 2011, Harrison 170, Wednesday 3pm, Computer Science
Most real engineering problems are characterised by having many criteria that are to be optimised simultaneously. Unfortunately the criteria are often conflicting, and so they have to be considered as a many-objective optimisation process in order to derive a trade-off surface of the available optimal solutions. Although a plethora of algorithms have been developed for optimising two-objective problems, many of them do not work well as the number of objectives increases. The talk introduces some of the new algorithms that have been developed for investigating many-objective problems and describes how the methods have been used to advance the design of airborne fire-control and surveillance radars.


Various Formulations for Learning the Kernel and Structured Sparsity
Prof. Massimiliano Pontil (Department of Computer Science, UCL)
1 Dec 2010, Harrison 170, Wednesday 2pm, Computer Science
The problem of learning a Mercer kernel is of central importance in the context of kernel-based methods, such as support vector machines, regularized least squares and many more. In this talk, I will review an approach to learning the kernel, which consists in minimizing a convex objective function over a prescribed set of kernel matrices. I will establish some important properties of this problem and present a reformulation of it from a feature space perspective. A well studied example covered by this setting is multiple kernel learning, in which the set of kernels is the convex hull of a finite set of basic kernels. I will discuss extensions of this setting to more complex kernel families, which involve additional constraints and a continuous parametrization. Some of these examples are motivated by multi-task learning and structured sparsity, which I will describe in some detail during the talk.


Analysis, synthesis and applications of gene regulatory network models
Prof. Yaochu Jin (Department of Computing, University of Surrey)
10 Nov 2010, Harrison 107, Wednesday 2pm, Computer Science
This talk starts with a brief introduction to computational models of gene regulatory networks (GRNs), followed by a description of our recent results on analyzing and synthesizing gene regulatory motifs, particularly from the robustness and evolvability perspective. We show that in a feedforward Boolean network, the trade-off between robustness and evolvability cannot be resolved. In contrast, we show that this trade-off can be resolved in an ODE-based GRN model for cellular growth, based on a quantitative evolvability measure. In addition, we demonstrate that robust GRN motifs can emerge from in silico evolution without an explicit selection pressure on robustness. Our results also suggest that evolvability is evolvable without explicit selection.


An Ontology of Information and Information Bearers.
Dr. Antony Galton (Computer Science)
3 Nov 2010, Harrison 170, Wednesday 2pm, Computer Science (Internal)
In many areas, such as emergency management, coordinated action can be hampered by lack of suitable informatic support for integrating diverse types of information, in different formats, from a variety of sources, all of which may be relevant to the problem at hand. To create software that is able to handle such a diversity of information types in a unified framework it is necessary to understand what types of information there are, what forms they can take, and how they are related to each other and to other entities of concern. To this end, I am currently developing a formal ontology of information entities to serve as a reference point for subsequent system development activities. In this talk I will discuss some of the issues that I have had to address in developing the ontology.


Novel Machine Learning Methods for Data Integration
Dr. Colin Campbell (Intelligent System Lab, University of Bristol)
27 Oct 2010, Harrison 170, Wednesday 2pm, Computer Science
Substantial quantities of data are being generated within the biomedical sciences, and the successful integration of different types of data remains an important challenge. The talk begins with an overview of our motivation for investigations in this context. We then briefly review work on the joint unsupervised modelling of several types of data believed to be functionally linked, such as microRNA and gene expression array data from the same cancer patient. Next we consider supervised learning and outline several approaches to multi-kernel learning which can handle disparate types of input data. We conclude with a discussion of future avenues for investigation.


Rubberband Algorithms - A General Strategy for Efficient Solutions of Euclidean Shortest Path Problems
Prof. Reinhard Klette (University of Auckland, New Zealand)
20 Oct 2010, Harrison 170, Wednesday 2pm, Computer Science
Euclidean shortest path problems in 2D or 3D are typically either solvable in linear time or of higher-order time complexity, often even NP-hard. Rubberband algorithms follow a general design strategy which is relatively simple to implement, assuming a step set has been identified which contains the vertices of shortest paths and is itself easily calculable. The talk presents solutions to selected shortest path problems using rubberband algorithms.
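As an illustration of the design strategy (a toy instance, not one of the talk's problems), the sketch below computes a shortest path that must cross a sequence of vertical segments in order, by repeatedly sliding each via point to its locally optimal position on its segment:

```python
def rubberband(start, end, segments, iters=100):
    """Shortest path from start to end visiting each vertical segment
    (x, y_lo, y_hi) in order: slide each via point to the locally
    optimal position on its segment until the path stops changing."""
    pts = [(x, (lo + hi) / 2.0) for (x, lo, hi) in segments]
    for _ in range(iters):
        moved = False
        for i, (x, lo, hi) in enumerate(segments):
            ax, ay = start if i == 0 else pts[i - 1]
            bx, by = end if i == len(segments) - 1 else pts[i + 1]
            # Where the straight line between neighbours crosses this
            # segment's x-coordinate, clamped to the segment's extent.
            y = ay + (by - ay) * (x - ax) / (bx - ax)
            y = min(max(y, lo), hi)
            if abs(y - pts[i][1]) > 1e-12:
                pts[i] = (x, y)
                moved = True
        if not moved:
            break
    return [start] + pts + [end]

path = rubberband((0.0, 0.0), (3.0, 0.0), [(1.0, -2.0, 2.0), (2.0, 1.0, 2.0)])
```

The straight line from (0,0) to (3,0) misses the second segment, so the path bends: the relaxation converges to via points (1, 0.5) and (2, 1.0).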


Kent's talk: A hyper-heuristic approach to generating mutation operators with tailored distributions.

Richard's talk: A Bayesian Framework for Active Learning
Kent McClymont and Richard Fredlund (Computer Science)
13 Oct 2010, Harrison 107, Wednesday 2pm, Computer Science (Internal)
Kent's talk: Discussion of a method for generating new probability distributions tailored to specific problem classes for use in optimisation mutation operators. A range of bespoke operators with varying behaviours is created by evolving multi-modal Gaussian mixture model distributions. These automatically constructed operators are found to match the performance of a single tuned Gaussian distribution when compared using a (1+1) Evolution Strategy. In this study, the generated heuristics are shown to display a range of desirable characteristics for the DTLZ and WFG test problems, such as speed of convergence.
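A minimal sketch of the idea behind Kent's talk (the mixture parameters below are hand-picked placeholders, where the hyper-heuristic would evolve them): a mutation operator drawing perturbations from a Gaussian mixture, plugged into a (1+1) Evolution Strategy.

```python
import random

class MixtureMutation:
    """Mutation operator drawing perturbations from a Gaussian mixture.

    The (weight, mean, sigma) components stand in for parameters that
    the evolved operators in the talk would tailor to a problem class.
    """
    def __init__(self, components, seed=0):
        self.components = components
        self.rng = random.Random(seed)

    def perturb(self):
        r, acc = self.rng.random(), 0.0
        for w, mu, sigma in self.components:
            acc += w
            if r <= acc:
                return self.rng.gauss(mu, sigma)
        w, mu, sigma = self.components[-1]
        return self.rng.gauss(mu, sigma)

    def mutate(self, x):
        return [xi + self.perturb() for xi in x]

def one_plus_one_es(f, x, op, generations=500):
    """(1+1)-ES: keep the mutated child only if it is no worse."""
    fx = f(x)
    for _ in range(generations):
        y = op.mutate(x)
        fy = f(y)
        if fy <= fx:
            x, fx = y, fy
    return x, fx

sphere = lambda x: sum(xi * xi for xi in x)
op = MixtureMutation([(0.7, 0.0, 0.1), (0.3, 0.0, 1.0)])
best, best_f = one_plus_one_es(sphere, [3.0, -2.0], op)
```

Mixing a narrow and a wide component gives the operator both fine refinement steps and occasional large escape moves, which is the kind of behavioural variety the evolved distributions exhibit.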

Richard's talk: We describe a Bayesian framework for active learning for non-separable data, which incorporates a query density to explicitly model how new data is to be sampled. The model makes no assumption of independence between queried data-points; rather it updates model parameters on the basis of both observations and how those observations were sampled. A "hypothetical" look-ahead is employed to evaluate expected cost in the next time-step. We show the efficacy of this algorithm on the probabilistic high-low game which is a non-separable generalisation of the separable high-low game introduced by Seung et al. (1993). Our results indicate that the active Bayes algorithm performs significantly better than passive learning even when the overlap region is wide, covering over 30% of the feature space.


Professor Mario Alexandre Teles de Figueiredo
2 Jun 2005, Harrison 254, Tuesday 3pm, Computer Science


THIS SEMINAR HAS BEEN CANCELLED: 'Automated feature detection and classification in Solar Feature Catalogues'
Dr Valentina Zharkova
26 May 2005, Harrison 254, Tuesday 3pm, Computer Science
The searchable Solar Feature Catalogues (SFC), developed using automated pattern recognition techniques applied to digitized solar images, are presented. The techniques were applied for the detection of sunspots, active regions, filaments and line-of-sight magnetic neutral lines in automatically standardized full-disk solar images in Ca II K1, Ca II K3 and H-alpha taken at the Meudon Observatory, and in white-light images and magnetograms from SOHO/MDI. The results of automated recognition were verified against manual synoptic maps and available statistical data, which revealed good detection accuracy. Based on the recognized parameters, a structured database of the Solar Feature Catalogues was built on a MySQL server for every feature and published with various pre-designed search pages on the Bradford University website. The SFCs, with a coverage of ten years (1996-2005), are to be used for solar feature classification and activity forecasting; the first classification attempts will be discussed.


Computation without representation, and other mysteries
Derek Partridge (Department of Computer Science)
13 May 2005, Harrison 171, Friday 4pm, Computer Science (Internal)
The talk will cover the necessity for software that is not, and cannot be, faultless, but is, nevertheless, optimal. In particular, the need for inductive software technologies and why it may be impossible to track down known bugs. Then we move on to computations that know when they're wrong (to cope with the inevitable erroneous outputs), and a final generalization into the philosophical notion of accurately approximate computation as an alternative to precisely correct/incorrect computation.


Using perceptual models to improve fidelity and provide invariance to valumetric scaling for quantization index modulation watermarking
Professor Ingemar Cox
5 May 2005, Harrison 254, Tuesday 3pm, Computer Science
Quantization index modulation (QIM) is a computationally efficient method of watermarking with side information. This paper proposes two improvements to the original algorithm.
First, the fixed quantization step size is replaced with an adaptive step size that is determined using Watson's perceptual model. Experimental results on a database of 1000 images illustrate significant improvements in both fidelity and robustness to additive white Gaussian noise.
Second, modifying the Watson model so that it scales linearly with valumetric (amplitude) scaling results in a QIM algorithm that is invariant to valumetric scaling. Experimental results compare this algorithm with both the original QIM and an adaptive QIM, and demonstrate superior performance.
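The core embedding step of QIM is easy to state: to hide one bit, round the signal value to one of two interleaved quantisation lattices. The sketch below uses a fixed step size; the paper's first contribution replaces this with a per-coefficient step derived from Watson's perceptual model (illustrative code, not the authors' implementation):

```python
def qim_embed(value, bit, step=8.0):
    """Embed one bit by quantising to the bit-0 lattice (multiples of
    step) or the bit-1 lattice (offset by half a step)."""
    offset = (step / 2.0) * bit
    return step * round((value - offset) / step) + offset

def qim_detect(value, step=8.0):
    """Recover the bit by finding which lattice the value lies nearest."""
    best_bit, best_err = 0, float("inf")
    for bit in (0, 1):
        err = abs(qim_embed(value, bit, step) - value)
        if err < best_err:
            best_bit, best_err = bit, err
    return best_bit

marked = qim_embed(23.7, 1)
```

Detection stays correct for additive noise smaller than a quarter of the step size, which is why an adaptive (perceptually as large as possible) step improves robustness without harming fidelity.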


The Soft Machines: Computing with the Code of Life
Martyn Amos (Department of Computer Science)
18 Mar 2005, Harrison 171, Friday 4pm, Computer Science (Internal)
Cellular computing is a new, rapidly expanding field of research at the intersection of biology and computer science. It is becoming clear that, in the 21st century, biology will be characterized more often as an information science. The flood of data generated first by the various genome projects, and now by high-throughput gene expression studies, has led to increasingly fruitful collaborations between biologists, mathematicians and computer scientists. However, until recently, these collaborations have been largely one-way, with biologists taking mathematical expertise and applying it in their own domain. With the onset of molecular and cellular computing, though, the flow of information has been growing in the reverse direction. Computer scientists are now taking biological systems and modifying them to obtain computing devices. Cells are being re-engineered to act as microbial processors, smart sensors, drug delivery systems and many other classes of device. This talk traces the brief history of cellular computing and suggests where its future may lie.


How computer science can reveal problems with the standard model of cancer and can identify new models of cancer
Ajit Narayanan (Department of Computer Science)
4 Mar 2005, Harrison 171, Friday 4pm, Computer Science (Internal)
The dominant paradigm of cancer development is that mutations to a small number of genes transform a healthy cell into a cancerous cell by blocking normal pathways or making other pathways hyperactive. Specific molecular pathways ('subway lines') are claimed to be responsible for programming these behaviours. My work on cancer gene expression data does not support this view.

While my own research aim is also to come up with networks and maps of cancer, what distinguishes my approach from the dominant paradigm and its alternatives is that, in my approach, a specific pathway may be normal or cancerous depending on the expression values of genes making up the pathway. We do not need to assume that a cancer pathway is like a tube-train taking a different and unscheduled route from normal. Instead we can assume that the same route may be used in both normal and cancer cells. What differs is the number of trains passing through each station. It may not even matter that some of these trains are not fully formed trains (i.e. are mutated). What matters is the volume of traffic along the route: a route that is normal one day can become cancerous another day if the volume of traffic along the route changes significantly. If this is right, this will lead to a new view of cancer development and progression that has immediate and very different implications for possible therapeutic intervention and prevention.

No knowledge of biology is assumed. The talk will introduce the dominant paradigm of cancer and present some of our work on the alternative, leaving room for discussion and speculation.


Towards an Evolutionary Computation Approach to the Origins of Music
Dr Eduardo Reck Miranda
3 Mar 2005, Harrison 254, Tuesday 3pm, Computer Science
Evolutionary Computation (EC) may have varied applications in Music. This paper introduces three approaches to using EC in Music (namely engineering, creative and musicological approaches) and discusses examples of representative systems that have been developed within the last decade, with emphasis on more recent and innovative works. We begin by reviewing engineering applications of EC in Music Technology, such as Genetic Algorithm and Cellular Automata sound synthesis, followed by an introduction to applications where EC has been used to generate musical compositions. Next, we introduce ongoing research into EC models to study the origins and evolution of music and detail our own research work on modelling the evolution of musical systems in virtual worlds.


Bayesian Averaging over Decision Trees
Vitaly Schetinin (Department of Computer Science)
25 Feb 2005, Harrison 171, Friday 4pm, Computer Science (Internal)
Bayesian averaging (BA) over classification models allows analysts to estimate the uncertainty of classification outcomes within prior knowledge. By averaging over an ensemble of classification models, the class posterior distribution, e.g. its shape and parameters, can be estimated and used by analysts to judge confidence intervals. However, the standard Bayesian methodology has to average over all possible classification models, which makes it computationally infeasible for real-world applications.

A feasible way of implementing BA is to use the Markov Chain Monte Carlo (MCMC) technique of random sampling from the posterior distribution. Within the MCMC technique, the parameters of a classification model are drawn from the given priors. The proposed models are accepted or rejected according to a Bayes rule. When the class posterior distribution becomes stable, the classification models are collected and their classification outcomes are averaged.

For Decision Tree (DT) classification models, which provide a trade-off between classification accuracy and interpretability, three questions still remain open. The first is the condition under which the Markov Chain can make reversible moves and guarantee that the MCMC can explore DTs with different parameters within the given priors. The second question is how to avoid local minima during the MCMC search. The third question is how to select a single DT from the DT ensemble for interpretation. All three problems will be discussed, and experimental results obtained on some real-world data (e.g. the UCI Machine Learning Repository, StatLog, the Trauma data, etc.) will be presented.
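The sample-accept-average loop described above can be sketched in a few lines. The example below is illustrative only, using a one-parameter "model" rather than a decision tree: parameters are drawn by random-walk Metropolis and the sampled models' predictions are averaged.

```python
import math, random

def metropolis_average(log_post, predict, theta0=0.0, prop_sigma=0.5,
                       n_steps=4000, burn_in=1000, seed=7):
    """Random-walk Metropolis over one model parameter; predictions of
    the sampled models are averaged, approximating the Bayesian average."""
    rng = random.Random(seed)
    theta, lp = theta0, log_post(theta0)
    total, count = 0.0, 0
    for step in range(n_steps):
        cand = theta + rng.gauss(0.0, prop_sigma)
        lp_cand = log_post(cand)
        if math.log(rng.random()) < lp_cand - lp:  # accept/reject rule
            theta, lp = cand, lp_cand
        if step >= burn_in:
            total += predict(theta)
            count += 1
    return total / count

# Toy posterior: theta ~ N(1, 0.5^2); the model "prediction" is theta
# itself, so the average should approach the posterior mean, 1.0.
avg = metropolis_average(lambda t: -((t - 1.0) ** 2) / (2 * 0.25),
                         lambda t: t)
```

For DTs the parameter vector is the tree structure plus split thresholds, which is where the reversibility and local-minima questions above arise.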


A Dominance-Based Mapping from Multi-Objective Space for Single Objective Optimisers
Kevin Smith (Department of Computer Science)
18 Feb 2005, Harrison 171, Friday 4pm, Computer Science (Internal)
Traditional optimisation research has concentrated on the single-objective case, where one measure of the quality of the system is optimised exclusively. Most real-world problems, however, are constructed from multiple, often competing, objectives which prevent the use of single-objective optimisers. Here a generic mapping to a single objective function is proposed for multi-objective problems, allowing single-objective optimisers to be used for the optimisation of multi-objective problems. A simulated annealer using this mapping is proposed and shown to perform well on both test and commercial problems.
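One simple instance of such a dominance-based mapping (a sketch of the general idea, not necessarily the exact measure from the talk) scores each candidate by how many other solutions Pareto-dominate it, giving a single scalar that a simulated annealer can minimise:

```python
def dominates(a, b):
    """Pareto dominance for minimisation: a is no worse in every
    objective and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def dominance_rank(objectives, population):
    """Scalar fitness: the number of population members that dominate
    this point. Pareto-optimal points score 0, so minimising the rank
    drives solutions toward the front."""
    return sum(dominates(p, objectives) for p in population)

pop = [(1.0, 4.0), (2.0, 2.0), (4.0, 1.0), (3.0, 3.0)]
ranks = [dominance_rank(p, pop) for p in pop]
```

Here the first three points are mutually non-dominated (rank 0) while (3.0, 3.0) is dominated by (2.0, 2.0), so a single-objective optimiser using this rank would discard it.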


Identifying Familiarity and Dialogue Type in Transcribed Texts
Andrew Lee (Department of Computer Science)
11 Feb 2005, Harrison 171, Friday 4pm, Computer Science (Internal)
In a spoken dialogue, there is a lot of information that is not explicitly stated but can be identified through non-linguistic features, such as tone of voice or a change in speaker. However, this information is not always available when the conversation is transcribed into a written text.

In this talk, I'll be describing methods for measuring two aspects of dialogues that can be lost when transcribed: the familiarity between participants and the type of dialogue.

In the case of familiarity, Dialogue Moves are counted for conversational transcripts from the Map Task corpus. The differences in Dialogue Move pair distributions are compared between transcripts where participants are either familiar or unfamiliar with each other to explore whether a measure of familiarity can be based on this approach.

To identify the type of dialogue, the frequency distributions of Verbal Response Modes in transcribed texts are counted for a number of different dialogues, including interviews, presentations and speeches. Profiles generated from the frequency distributions are then used as a basis for comparison to identify the closest matching dialogue type.


ROC Optimisation of Safety Related Systems
Jonathan Fieldsend (Department of Computer Science)
4 Feb 2005, Harrison 171, Friday 4pm, Computer Science (Internal)
In this talk the tuning of critical systems is cast as a multi-objective optimisation problem. It is shown how a region of the optimal receiver operating characteristic (ROC) curve may be obtained, permitting the system operators to select the operating point. This methodology is applied to the STCA system, showing that the current hand-tuned operating point can be improved upon, as well as providing the salient ROC curve describing the true-positive versus false-positive trade-off. In addition, through bootstrapping the data we can also look at the effect of data uncertainty on individual parameterisations, and on the ROC curve as a whole.
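The ROC curve itself is cheap to compute once a parameterisation yields real-valued alert scores with ground-truth labels. A minimal sketch with hypothetical scores (ties between scores are not handled specially):

```python
def roc_curve(scores, labels):
    """True-positive vs false-positive rates swept over all thresholds,
    for real-valued alert scores against binary ground truth."""
    pairs = sorted(zip(scores, labels), reverse=True)
    pos = sum(labels)
    neg = len(labels) - pos
    tp = fp = 0
    points = [(0.0, 0.0)]
    for score, label in pairs:
        if label:
            tp += 1
        else:
            fp += 1
        points.append((fp / neg, tp / pos))
    return points

curve = roc_curve([0.9, 0.8, 0.6, 0.4, 0.3], [1, 1, 0, 1, 0])
```

The multi-objective optimiser in the talk searches over system parameterisations so that the resulting (false-positive, true-positive) points trace out the best attainable curve rather than a single operating point.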


Ultrasound Image Segmentation
Professor Alison Noble (University of Oxford)
3 Feb 2005, Harrison 254, Tuesday 3pm, Computer Science
Ultrasound image segmentation is considered a challenging area of medical image analysis as clinical images vary in quality and image formation is nonlinear. Most classical approaches to segmentation do not work well on this data.
In this talk, I will provide an overview of research we have done in this area, using what I call weak physics based approaches, specifically looking at the tasks of displacement estimation, and segmentation of tissue regions and perfusion uptake.


Adopting Open Source tools in a production environment: are we locked in?
Brian Lings (Department of Computer Science)
21 Jan 2005, Harrison 171, Friday 4pm, Computer Science (Internal)
Many companies are using model-based techniques to offer a competitive advantage in an increasingly globalised systems development industry. Central to model-based development is the concept of models as the basis from which systems are generated, tested and maintained. The availability of high-quality tools, and the ability to adopt and adapt them to the company practice, are therefore important qualities. Model interchange between tools becomes a major issue. Without it, there is significantly reduced flexibility, and a danger of tool lock-in. In this talk I report on a case study in which a systems development company, SAAB Combitech, has explored the possibility of complementing their current proprietary tools by exporting models in XMI to open source products for supporting their model-based development activities. We found that problems still exist with interchange, and that the technology needs to mature before industrial-strength model interchange becomes a reality.


Dr Sunil Vadera
20 Jan 2005, Harrison 209, Tuesday 3pm, Computer Science

Much of our daily reasoning appears to be based on stereotypes, exemplars and anecdotes. Yet basic statistics informs us that decisions based on only limited data are, at best, likely to be inaccurate, if not badly wrong. However, exemplars and stereotypes are not arbitrary data points; they are based on experience and represent prototypical situations. The ability to predict the behaviour of a consumer, observe that two people are related, diagnose an illness, and even anticipate how an MP might vote on a particular issue all depend on a person's past experience - that is, the exemplars and stereotypes a person learns.

If this hypothesis, namely that we can form and reason well with exemplars, is true, we should be able to identify exemplars from data. To achieve this, we need to answer the following questions: (a) What is an exemplar and how can it be represented? (b) How do we learn good exemplars incrementally? (c) How can exemplars be used?

This seminar presents an approach to these questions that involves the use of the notion of family resemblance to learn exemplars and Bayesian networks to represent and utilise exemplars. Empirical results of applying the model will be presented and relationships with other models of machine learning also discussed.


MODELLING THROUGH: The growing role of decision analytic models in Health Technology Assessment
Martin Pitt (Department of Computer Science)
14 Jan 2005, Harrison 171, Friday 4pm, Computer Science (Internal)
In recent years, analytic modelling has been widely used to support decision making within the National Health Service (NHS) for a range of applications and with varying levels of success.

A recent development has been the adoption of mathematical and computer-based models as a central element of the process of Health Technology Assessment (HTA). HTA is concerned with the evaluation of alternative healthcare interventions (e.g. alternative drug therapies) in order to directly inform decision making. It is now at the forefront of health research in the UK, representing the largest single strand of NHS-funded research activity. HTA outputs form a key element in the process by which the National Institute for Clinical Excellence determines general guidelines for UK prescription and clinical practice.

This presentation will take an informal look at the rapidly developing field of decision analytic modelling in HTA. It will outline the range of alternative mathematical and computer-based approaches adopted, with reference to case study examples, and illustrate how model outputs feed into the decision making process. Key challenges within this field, such as how data uncertainty is handled, will be examined in relation to current areas of active development.


***Jonas Gamalielsson - Developing a Method for Assessing the Interestingness of Rules Induced from Microarray Gene Expression Data ***Zelmina Lubovac - Revealing modular organization of the protein interactome by combining its topological and functional properties ***Simon Lundell - Modelling the Haematopoietic Stem Cell System.
Jonas Gamalielsson, Simon Lundell and Zelmina Lubovac
17 Dec 2004, Harrison 209, Wednesday 3pm, Computer Science
***Jonas Gamalielsson - Abstract: The aim and contribution of this work is to develop a method for assessing the interestingness of rules induced from microarray gene expression data, using a combination of objective and domain-specific measures. This will assist biologists in finding the most interesting hypotheses to test after mining for rules in such data, and should generate more accurate models than objective measures alone. More specifically, a method is being developed for assessing the biological plausibility of hypothetical regulatory relations generated by data mining algorithms applied to gene expression data. The idea is an information fusion approach: knowledge about the Gene Ontology functional classification of gene products and the topology of known regulatory pathways is used to generate templates representing general knowledge of regulation in pathways. Templates show what kinds of gene products, with respect to molecular function, have been found to participate in different types of regulatory relations in pathways. A training set of regulatory relations is used to derive the templates, and a test set of hypothetical regulatory relations is used to assess how well the templates distinguish between hypothetical relations of high and low biological plausibility with respect to the training relations.

***Zelmina Lubovac - Abstract: Understanding the structure of the protein interaction network is a useful first step towards revealing the underlying principles of the large-scale organisation of the cell. In this project, we analyse a topological characterisation of the yeast (Saccharomyces cerevisiae) protein-protein interaction network and relate it to functional annotations from the Gene Ontology (GO). We aim to develop a biologically informed measure to reveal modular formations in an interactome. A semantic similarity measure has been used to assess the role of hubs in a network in terms of functional annotation from GO. The existing graph-theoretic notion of the module has been used in previous work to perform a modular decomposition of the protein network, i.e. to break sub-graphs down into a hierarchy of nested modules or units that group proteins with common functional roles. Our aim is to complement the existing graph-theoretic approach with semantic similarity based on proteins' ontological terms, to achieve more biologically plausible descriptions of modular decomposition.

***Simon Lundell - Abstract: Throughout the life of higher animals, the haematopoietic system produces blood with a composition of various cell types, e.g. lymphocytes, erythrocytes and platelets. This system has to adapt to the animal's growth and to changes such as stress conditions, e.g. infection and blood loss. If the animal is infected, a new composition of blood cells is needed, as well as compensation for the cells lost in the battle against the intruder. The haematopoietic system is highly adaptable and has stem cells that remain dormant for long periods of time. When needed, the dormant cells can proliferate and may then give rise to millions of new cells. Regulating the number of haematopoietic cells is crucial: the system must avoid both depletion of stem cells and excess production of cells, as either is a life-threatening condition. A mouse produces 60% of its body weight in blood cells over its lifetime, and a human produces 10 times its body weight. Despite this large production of blood cells, only a few stem cells have been reported as necessary to recreate the haematopoietic system. Using an object-based model of this system, we were able to reproduce experimental data and to establish which types of feedback regulation are active during stem cell transplantation. The system's intricate organisation, its dynamic structure and the need to compare simulations against a large number of experimental setups call for a new way of organising these models.
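The GO-based semantic similarity in Lubovac's abstract is not specified there; as a minimal illustrative sketch (not the measure used in the work), two ontology terms can be compared by the overlap of their ancestor sets in the GO DAG. The `parents` table, term identifiers and the Jaccard-overlap choice below are all assumptions for the sake of a runnable toy example:

```python
def ancestors(term, parents):
    """All terms reachable upwards from `term` in the ontology DAG (inclusive)."""
    seen, stack = set(), [term]
    while stack:
        t = stack.pop()
        if t not in seen:
            seen.add(t)
            stack.extend(parents.get(t, []))
    return seen

def term_similarity(t1, t2, parents):
    """Jaccard overlap of ancestor sets: 1.0 for identical terms."""
    a1, a2 = ancestors(t1, parents), ancestors(t2, parents)
    return len(a1 & a2) / len(a1 | a2)

# Toy ontology fragment: GO:0003 and GO:0004 are siblings under GO:0001.
parents = {"GO:0003": ["GO:0001"], "GO:0004": ["GO:0001"], "GO:0001": ["GO:0000"]}
print(term_similarity("GO:0003", "GO:0004", parents))  # 0.5
```

Siblings share two of their four combined ancestors, hence 0.5; real GO-based measures typically also weight terms by information content, which this sketch omits.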


Wiggly Outlines: The Ins and Outs of Non-convex Hulls
Antony Galton (Department of Computer Science)
17 Dec 2004, Harrison 171, Friday 4pm, Computer Science (Internal)
Given a set of objects located in space, how can we represent the region occupied by those objects at different granularities? At a very fine granularity, the region simply consists of all the points occupied by the objects themselves, whereas at a very coarse granularity it might be given by the convex hull of those points. In many cases, neither of these representations is very useful: what we want is some kind of non-convex hull to convey the overall 'shape' of the region. But whereas the convex hull of a set of points is uniquely defined, there are any number of candidates for their non-convex hull. In this talk I shall introduce and explore some of the properties of a family of non-convex hulls generated by a generalisation of a simple convex-hull algorithm.
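The abstract does not say which simple convex-hull algorithm is generalised; for reference, here is one standard candidate, Andrew's monotone chain, as a minimal sketch. The non-convex family discussed in the talk arises from relaxing such an algorithm (e.g. limiting how far the boundary may "reach" across a concavity), which this sketch deliberately does not attempt:

```python
def convex_hull(points):
    """Andrew's monotone chain: hull vertices in counter-clockwise order."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts

    def cross(o, a, b):
        # z-component of (a-o) x (b-o); > 0 means a left turn.
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

    lower, upper = [], []
    for p in pts:                      # build lower hull left to right
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):            # build upper hull right to left
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]     # drop duplicated endpoints

# The interior point (1, 1) vanishes; a non-convex hull might keep boundary detail.
print(convex_hull([(0, 0), (2, 0), (2, 2), (0, 2), (1, 1)]))
# [(0, 0), (2, 0), (2, 2), (0, 2)]
```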


A personal view of computer assisted formal reasoning
Matthew Fairtlough (University of Sheffield)
10 Dec 2004, Harrison 171, Friday 4pm, Computer Science (Internal)
I will survey my work of the last 13 years in the field of logic and formal reasoning; in particular I will discuss the methods, logics, implementations, tools and applications involved. The most striking development has been a new modal logic called Lax Logic, so my talk will focus on this. I will give a flavour of the logic itself and of its application to formal reasoning across (multiple) abstraction boundaries, where inevitably constraints arise whose values may not be precisely known in advance. I will end by presenting some recent work using Mathematica to analyse and animate a dynamic 3-dimensional lattice.


Seeing where the other guy is coming from: A survival guide for young researchers working in multi-disciplinary subjects.
James Hood (Department of Computer Science)
3 Dec 2004, Harrison 171, Friday 4pm, Computer Science (Internal)
Part of the confidence I have gained recently in my own work is the result of trying to see where the different disciplinary groups in my general field of research are coming from -- what they are trying to gain in doing what they are doing. This exercise, which has led to a better understanding of my own research goals, was not possible until I situated my own research against that of a greater research community. Its upshot was my very own 'research map', on which, by generalising over people and groups, I felt I had a better idea not only of where the different groups were coming from, but also of where I was going. In this talk I want to present my research map, my experiences of creating it, and the lessons I have learned from doing so. Bridge building in multi-disciplinary research is now so important that all of us could benefit from doing a little map making every now and then. I hope therefore that this talk will be of some interest.


Explanatory Shifts and structures for knowledge
Donald Bligh (Department of Computer Science)
19 Nov 2004, Harrison 171, Friday 4pm, Computer Science (Internal)
Explanations involve fitting what is to be explained into a context that has already formed in the minds or brains of the one who explains and the one who tries to understand. But how did they come to understand that preformed context? By fitting it into an earlier preformed context, and so on ad infinitum?

Well no, not quite. Eventually you reach fundamental elements of preformed contexts, of which there are at least ten, including sameness, difference, change, direction, force, awareness, like/dislike, obligation and intention. Just as items of knowledge are built upon previous knowledge, so their preformed contexts build upon each other. The elements combine in ever more complex ways, a bit like complex molecular structures built of simple elements.

When asked to justify an explanation, the reverse process occurs: analysis or deconstruction of the explanation. The justification is then in a different context from the explanation being justified. That is what I call an explanatory shift.


Theory of Molecular Computing: Splicing and Membrane Systems
Pier Frisco (Department of Computer Science)
12 Nov 2004, Harrison 171, Friday 4pm, Computer Science (Internal)
Molecular Computing is a new and fast growing field of research at the interface of computer science and molecular biology driven by the idea that molecular processes can be used for implementing computations or can be regarded as computations. This research area has emerged in recent years not only as a novel technology for information processing, but also as a catalyst for knowledge transfer between the fields of information processing, nanotechnology and biology. Molecular Computing (together with research areas such as Quantum Computing, Evolutionary Algorithms and Neural Networks) belongs to Natural Computing which is concerned with computing taking place in nature and computing inspired by nature. In this talk I will give an overview of my research in the theoretical aspects of Molecular Computing. In particular I will talk about two theoretical models of computation: splicing systems and membrane systems.
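As a concrete illustration of the first model: a splicing rule (u1, u2; u3, u4) cuts one string between u1 and u2, cuts another between u3 and u4, and recombines the resulting prefix and suffix. The sketch below is a simplified toy, assuming first-occurrence matching and producing only one of the two recombinants:

```python
def splice(x, y, rule):
    """Apply splicing rule (u1, u2; u3, u4): from x = x1·u1·u2·x2 and
    y = y1·u3·u4·y2, produce the recombinant x1·u1·u4·y2 (or None if
    either string lacks the required cut site)."""
    u1, u2, u3, u4 = rule
    i = x.find(u1 + u2)   # leftmost cut site in x
    j = y.find(u3 + u4)   # leftmost cut site in y
    if i < 0 or j < 0:
        return None
    return x[:i + len(u1)] + u4 + y[j + len(u3) + len(u4):]

# Cut "TAAT" as TA|AT and "GCGC" as GC|GC, then recombine prefix with suffix.
print(splice("TAAT", "GCGC", ("TA", "AT", "GC", "GC")))  # TAGC
```

In the full formal model the rule is applied at every matching site and both recombinants are generated, so the language produced by iterated splicing can be much richer than this single-step sketch suggests.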


Fractals and Image Processing
Professor Revathy
11 Nov 2004, Harrison 209, Tuesday 3pm, Computer Science


A Painless Introduction to Mereogeometry
Dr Ian Pratt-Hartmann (University of Manchester)
4 Nov 2004, Harrison 209, Tuesday 3pm, Computer Science
One of the many achievements of coordinate geometry has been to provide a conceptually elegant and unifying account of spatial entities. According to this account, the primitive constituents of space are points, and all other spatial entities---lines, curves, surfaces and bodies---are nothing other than the sets of those points which lie on them. The success of this reduction is so great that the identification of all spatial objects with sets of points has come to seem almost axiomatic. For most of the previous century, however, a small but tenacious band of authors has suggested that more parsimonious and conceptually satisfying representations of space are obtained if we adopt an ontology in which regions, not points, are the primitive spatial entities. These, and other, considerations have prompted the development of formal languages whose variables range over certain subsets (not points) of specified classes of geometrical structures. We call the study of such languages 'mereogeometry'. In the past decade, the Computer Science community in particular has produced a steady flow of new technical results in mereogeometry, especially concerning the computational complexity of region-based topological formalisms with limited expressive power. The purpose of this talk is to survey this work in general and (largely) non-technical terms. In particular, we aim to locate the various recent mathematical results in mereogeometry within a general mathematical framework. The result will be a stock-take of results and a list of open problems.
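Region-based topological formalisms of the kind surveyed here include calculi such as RCC. At the coarsest (RCC5-like) level of distinction, modelling regions as finite point sets is enough to illustrate the relations; this is a toy sketch for intuition only, not how these calculi are actually interpreted over continuous space:

```python
def rcc5_relation(a, b):
    """Coarse RCC5-style relation between two regions given as finite point sets."""
    a, b = set(a), set(b)
    if not a & b:
        return "DR"   # discrete: no shared points
    if a == b:
        return "EQ"   # identical regions
    if a < b:
        return "PP"   # a is a proper part of b
    if b < a:
        return "PPi"  # b is a proper part of a
    return "PO"       # partial overlap

print(rcc5_relation({1, 2, 3}, {3, 4}))  # PO
```

The finer RCC8 calculus additionally distinguishes boundary contact (e.g. externally connected vs disconnected), which requires a notion of topology that bare point sets do not carry.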


Professor Peter Cowling (Bradford University)
10 Jun 2004, Harrison 209, Tuesday 3pm, Computer Science


Machine Consciousness
Dr Owen Holland (University of Exeter)
3 Jun 2004, Harrison 209, Tuesday 3pm, Computer Science


The work at the Hadley Centre for Climate Prediction and Research, Met Office
Dr Vicky Pope (The Met Office)
26 May 2004, Harrison 209, Tuesday 3pm, Computer Science


Agents and affect: why embodied agents need affective systems.
Professor Ruth Aylett (Salford University)
13 May 2004, Harrison 209, Tuesday 3pm, Computer Science


Logic-based visual perception for a humanoid robot
Dr Murray Shanahan (Imperial College)
6 May 2004, Harrison 209, Tuesday 3pm, Computer Science


RSS meeting: Independent component analysis: flexible sources and non-stationary mixing
Richard Everson (University of Exeter (Computer Science))
25 Jan 2001, Laver, Statistics & Operational Research