School of Computer Science and Software Engineering

Seminars 2010

School Research Seminars Presented in 2010

  1. Software engineering research - looking back, looking forward
  2. Partial fingerprint identification - it’s not a joke
  3. Spatio-spectral analysis for multispectral biometric recognition
  4. Tableaux for real time temporal logic
  5. Tracking pedestrians in video sequences
  6. Some developments in modelling bushfire dynamics
  7. A prediction market for the coordination of a heterogeneous swarm of unmanned vehicles
  8. Generalised shift map for media retargeting
  9. Modelling of tissue growth: bioreactor geometry and multiscale analysis
  10. Formal engineering methods for software quality assurance
  11. Making sense of text: the ReAD principles
  12. The impact of privacy on localization algorithms
  13. Design of optimal wireless sensor networks: performance analysis and enhancements of the IEEE 802.15.4 MAC protocol for unsaturated and saturated WSNs
  14. The implementation of parallel search algorithms on graphics processing unit (GPU) for graph theoretic problems
  15. Adaptive real-time, self-learning river basin management: challenges
  16. Applications of mathematical modelling in tissue engineering
  17. Computational modelling of in vivo DNA label-retaining dynamics in hematopoietic stem cells
  18. Network is not just social
  19. Situation discovery from sensor data
  20. Trees, enzymes, computers: challenges in bioinformatics
  21. Improving the quality of students’ programming assignments
  22. Drift-correcting template update strategy for precision feature point tracking
  23. Space - there's a lot of data out there
  24. Improving image retrieval using text mining and automatic image annotation
  25. Improving your erlang programs and tests with wrangler
  26. Robust people tracking based on scene learning

Software Engineering Research – Looking Back, Looking Forward

  • Professor Kevin Ryan - Software Engineering Research Centre of Ireland
  • 11am Friday 29th October

Much has been achieved in software engineering since the term was first coined over 40 years ago. However, we still hear many justified criticisms of our field, both from people within it and from those outside who depend, increasingly, on our work. Some even question whether there is, or ever can be, a discipline of software engineering. In this talk it is argued that there are three forces driving change in software engineering: Specialisation, Industrialisation and Globalisation.

Domain-specific SE is on the rise - witness the growth in specialist tracks and conferences - and therefore the peculiarities of different social and business domains must be taken into account. We need research that helps software engineers to understand and reflect these in their work. Industrialisation means that SE is increasingly a matter of making the best use of available assets by re-using components and artefacts. Whether you own them, buy them or obtain them from the web, you must combine them opportunistically to meet new or anticipated requirements. Add to this the increased need for concurrent development of hardware, software and support systems, and we can see that future SE must be far more open to variability and flexibility. Globalisation means that many of the components and artefacts will be sourced from distant locations and that the assembly process will be distributed across time, space and cultures. We need research on global software processes, since it is foolish to rely on any imported recipe that has only been shown to work in a few, highly specific, locations and domains.

This talk argues that there is a bright future for software engineering research, provided it is undertaken by people who want to solve the real and difficult problems posed by change, and who are able to learn from the past without being its prisoner.

About the presenter

Professor Kevin Ryan was, from 2005-2010, founding Director of Lero, the national Software Engineering Research Centre of Ireland. Lero is a collaboration of four universities: the University of Limerick, Trinity College Dublin, University College Dublin and Dublin City University. Lero’s overall goal is to help develop the competence and awareness of software engineering in Irish industry and society.

Prior to becoming Centre Director for Lero, Kevin Ryan was Vice President Academic and Registrar of the University of Limerick and founding Dean of the College of Informatics and Electronics. His main research interests are in the field of software engineering and, in particular, in design methods, requirements engineering and the software process. He has been involved in a number of significant EU funded projects in the area of software methods and tools, including ToolUse and Atmosphere. With Dr. Joachim Karlsson he published some of the first papers on Requirements Prioritisation.

Kevin Ryan was organisation chair of the International Conference on Software Engineering (ICSE) in 2000, and of the IEEE International Symposium on Requirements Engineering (RE99) in 1999, both held in Limerick, and was Programme Chair of RE09 in Atlanta, Georgia. He is currently chair of the IFIP Working Group 2.9 on Software Requirements Engineering.

Back to top

Partial Fingerprint Identification – It Is Not A Joke

  • Dr. Jiankun Hu - School of Computer Science and IT, RMIT University
  • 11am Friday 22nd October

Fingerprints captured from crime scenes are latent prints unwittingly left by friction ridge skin on a surface. They are often obscure and partial. Though some work exists on partial fingerprint matching and verification, little has been reported on partial fingerprint identification, which is far more critical in crime investigation: in practice, no candidates are available to match against in the first place.

Existing fingerprint identification/indexing technologies fail to produce a reasonable candidate list because they need full fingerprint features (e.g., minutiae, orientation data) that cannot be obtained from limited fingerprint fragments. A long-standing perception has been that this is an unsolvable problem, as the missing or blank parts of a partial fingerprint cannot provide any further information. In this talk we will present the first analytical solution in this field for partial fingerprint identification against a very-large-scale database. The idea is to use null space shuttles to formulate a general solution space characterising the fingerprint sets that exhibit the patterns seen in the available fingerprint fragments. Experiments have been conducted which validate the proposed solution. Most of this work will appear in IEEE Transactions on Pattern Analysis and Machine Intelligence.

About the presenter

Dr. Jiankun Hu is an associate professor at the School of Computer Science and IT, RMIT University, where he heads the networking cluster in the Discipline of Distributed Computing and Networks. He obtained his PhD in Control System Engineering from the Harbin Institute of Technology, China, and a masters by research in Software Engineering from Monash University, Australia. He has worked as a research fellow at several universities, including the EE Department of Melbourne University, the EE Department of Delft University in the Netherlands, and Ruhr University in Germany (sponsored by the prestigious Alexander von Humboldt Foundation). Dr. Hu's research interest is in the field of network security, with a focus on anomaly intrusion detection and biometric security. He has served as Chair of the Network Security Symposium of IEEE ICC and IEEE Globecom, which are considered IEEE flagship conferences. He is also Program Chair of the International Conference on Network and System Security (NSS) 2010 and 2011, and has served as an associate editor for several ERA tier-A journals.

Dr. Hu has been awarded 5 ARC grants and has published in top venues, including IEEE Transactions on Pattern Analysis and Machine Intelligence. The fingerprint orientation extraction model proposed by his group, the FOMFE model, has been selected as an international benchmark algorithm.

Back to top

Spatio-Spectral Analysis for Multispectral Biometric Recognition

  • Zohaib Khan - PhD
  • 11am Friday 24th September

A Multispectral Image (MSI) measures the response of a scene to a number of discrete wavelengths of light. MSI can measure intrinsic spectral discriminatory information in physiological biometrics.

Biometrics based on spatial features have widely been used for personal identification. Recently, multispectral biometrics, especially the face and palmprint have received significant attention in research. Current approaches have mainly focussed on either spatial or spectral analysis of multispectral biometric data. These techniques are mostly an extension of current 2D monochrome or multimodal biometrics and do not fully exploit the spatio-spectral information.

This thesis proposes spatio-spectral analysis of multispectral biometrics for improved recognition. It aims to explore the spatial variation of spectral response over the human face and palm across a wide spectral range for biometric recognition. MSI fusion techniques will be developed to combine useful spatio-spectral features.

Algorithms will be developed for spatially adaptive band selection to extract more discriminant features and to reduce the number of required MSI bands. This will reduce the cost and improve the efficiency of multispectral biometric acquisition. The proposed techniques will be evaluated in accordance with standard biometric benchmark procedures.

In this seminar, I will first introduce the concept of multispectral biometrics. Then I will provide an overview of the existing techniques for multispectral biometrics. I will then discuss the significance of the proposed spatio-spectral analysis. Finally, I will present the research plan for my PhD study.

Back to top

Tableaux for Real Time Temporal Logic

  • Ji Bian - PhD
  • 11am Friday 10th September

Temporal logic comprises a family of languages with rich tense structure, used to express and reason about events in terms of time. There is a variety of temporal logics for different reasoning tasks, and the underlying model of time may be the discrete natural numbers, the dense linear real numbers, or algebras of intervals and trees. Despite wide interest, automated reasoning processes are still only practical for small problems, and there is much potential for improvement.

Some automated reasoning procedures are based on tableaux, a prevalent modal reasoning technique. Compared with other kinds of decision procedures, such as those based on automata, tableaux are more intuitive and natural, as they can also provide a corresponding model as a result of the reasoning. Despite the depth of research into tableaux, tableaux for linear temporal logic over real-numbered time and for metric temporal logic are not well studied. Yet dense, and especially real-numbered, models of time have a wide range of applications: they can be applied to databases, AI planning and natural language processing, and are very helpful in the specification, design and verification of complex systems. This suggests that the study of dense-time, and especially real-time, temporal logics would contribute greatly to the power of temporal reasoning and extend the use of tableaux in reasoning about complex reactive systems. In particular, we propose to focus on decision procedures and to create appropriate tableaux for real time.

Back to top

Tracking Pedestrians in Video Sequences

  • Zhengqiang Jiang - PhD
  • 11am Friday 3rd September

Visual tracking of pedestrians allows the 2D trajectory of each pedestrian in the video images to be determined.

The statistical data of such trajectories are useful for the planning of open space, the localization of entrance and exit doors, etc., in buildings and offices.

In this talk, I will briefly describe my proposed visual human tracking system. In addition, I will present a method that combines colour and motion information to track pedestrians in video sequences. Pedestrians are first detected using the human detector proposed by Dalal and Triggs, which involves computing histogram of oriented gradients descriptors and classification with a linear support vector machine. For the colour-based model, I extract a 4-dimensional colour histogram for each detected pedestrian window and compare these colour histograms between consecutive video frames using the Bhattacharyya coefficient. For the motion model, I use a Kalman filter which recursively predicts and updates the estimates of the positions of pedestrians in the video frames. The tracking method has been evaluated using videos from two pedestrian video datasets from the web.
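The colour-model step above can be sketched as follows. This is a minimal illustration, not the speaker's implementation: it uses a plain 3-channel RGB histogram in place of the 4-dimensional histogram mentioned in the abstract, and the bin count is an arbitrary choice.

```python
import numpy as np

def colour_histogram(window, bins=8):
    """Normalised colour histogram of a detected pedestrian window
    (an H x W x 3 array of 8-bit RGB values)."""
    pixels = window.reshape(-1, 3)
    hist, _ = np.histogramdd(pixels, bins=(bins,) * 3, range=((0, 256),) * 3)
    hist = hist.ravel()
    return hist / hist.sum()

def bhattacharyya(h1, h2):
    """Bhattacharyya coefficient between two normalised histograms:
    1.0 for identical distributions, 0.0 for non-overlapping ones."""
    return float(np.sum(np.sqrt(h1 * h2)))
```

In a tracker, the detection window in the current frame whose histogram gives the highest coefficient against a track's histogram from the previous frame would be taken as that track's colour-model match.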

Back to top

Some Developments in Modelling Bushfire Dynamics

  • Professor John Dold - School of Applied Mathematics, University of Manchester
  • 11am Friday 27th August

The seminar will outline the ideas behind three topics in bushfire physics.

Firstly, the intensity of a line bushfire is not proportional to the spread-rate of the fire, as is often thought, except when the fire is spreading in a quasi-steady way. Modelling the way that intensity must accumulate as spread rates change shows that oscillatory or "surge and stall" behaviour of a fire as well as eruptive fire spread can be described.

Secondly, a model for ember-dominated fire spread is developed in which numerous spotting events occur. The model, which simply ensures a proper bookkeeping of the key properties of ember production, transport, ignition probability and spotfire growth, leads to a straightforward spread rate formula for ember-dominated fires.

Finally, the very strong effect that moisture content has on fire spread is investigated. Since water vapour is mostly driven out of the vegetation before pyrolysate fuel vapour, it is found to act as a physical barrier between fuel and oxygen that can then inhibit the combustion reactions.

Back to top

A Prediction Market for the Coordination of a Heterogeneous Swarm of Unmanned Vehicles

  • Aidan Morgan - PhD
  • 11am Friday 20th August

The field of unmanned vehicle (UV) research has become highly active in both commercial and academic applications, utilising a range of different platforms including aircraft, ground based vehicles, and sea vessels. UVs are currently being used throughout the world in a range of applications including military, search and rescue operations, and agriculture.

However, the coordination of a team of disparate UVs is difficult, as there are many constraints to be managed, including distributing limited resources amongst the team, adapting to changes in the environment and mission objectives, and responding to teammates' actions. Many different approaches have been explored to address this problem; traditionally, however, these approaches use a central authority to decompose the problem into smaller tasks, distributing the planning and execution tasks to individual agents whilst centralising the higher-level coordination tasks.

This work explores the use of a prediction market from the financial arena to coordinate a "swarm" of heterogeneous UVs without the need for a central authority to assign tasks. Through the use of an implicit decentralised knowledge base, prediction markets can be used for belief propagation and consensus determination, with recent research showing this approach can be successfully used for predicting the outcome of events as diverse as elections, sporting events, and box office takings.

In this research proposal seminar, we will motivate the autonomous vehicle coordination problem, introduce the concept of a prediction market, examine its similarity and differences to existing market oriented techniques, and outline the planned research for this PhD investigation.

Back to top

Generalized Shift Map for Media Retargeting

  • Yiqun Hu - PhD
  • 11am Friday 6th August

With the proliferation of terminal devices with different screen form factors and increasingly rich media data (e.g. images and videos), media retargeting techniques have recently attracted great interest in the computer vision and graphics communities. Unlike traditional blind resizing methods such as cropping and scaling, new media retargeting methods increase or decrease the size of media according to its inherent content, so that 'important' content can be better preserved during the retargeting process.

In this talk, I will cover our recent advances based on the shift map for the problem of media retargeting. First, we target the problem of video retargeting. A new solution called the hybrid shift map has been proposed to generate retargeted video that looks natural with respect to the source video in both the spatial and temporal domains. We design a new criterion, spatio-temporal naturality, to measure the spatial and temporal similarity between input and output without any motion analysis. To improve the efficiency of graph-cut-based optimisation for video data, we design a new hybrid multi-resolution mechanism. Next, for multi-operator retargeting, we have proposed a novel solution that generalises the integer-valued shift map to a real-valued shift map. A unified retargeting space is introduced to accommodate all existing operators as well as a new type of hybrid operator. Through graph-cut optimisation with a specific energy function covering information loss, spatial naturality and sampling loss, our method adaptively integrates multiple operators for retargeting images.

Back to top

Modelling of Tissue Growth: Bioreactor Geometry and Multiscale Analysis

  • Reuben O’Dea - University of Nottingham
  • 11am Friday 30th July

I will discuss two separate modelling approaches to describe (i) the influence of bioreactor geometry and mechanical effects on tissue growth and (ii) the formation of spatial patterns over a short lengthscale in developing tissue.

Firstly, a three-phase model for the growth of a tissue construct within a perfusion bioreactor is examined. Through the prescription of appropriate functional forms for cell proliferation and extracellular matrix deposition rates, the model is used to compare the influence of cell density-, pressure- and culture medium shear stress-regulated growth on the composition of the engineered tissue. Solutions obtained in the long-wavelength limit are compared with 2D simulations to demonstrate that, in order to capture accurately the effect of mechanotransduction mechanisms on tissue construct growth, spatial effects in at least two dimensions must be included, due to the inherent spatial variation of the mechanical stimuli relevant to perfusion bioreactors.

The integration of microscale effects into tissue-scale formulations (such as that discussed above) is crucial to understanding tissue growth and mechanics. For instance, cell signalling mechanisms leading to fine-grained spatial patterning of cell differentiation are crucial in early tissue development. I have developed tissue-scale descriptions of this process based on generic discrete signalling models in a range of cellular geometries. Most applications of multiscale asymptotic methods are to continuous systems, while here discreteness on the short lengthscale must be taken into account. This work was undertaken within The Mathematics of 3D Tissue Morphogenesis and Regenerative Medicine (MRM) project at the University of Nottingham, which comprises experimentalists and theoreticians working on three broad themes: (i) cell signalling pathways, (ii) stem cell differentiation and (iii) tissue organisation. A brief overview of some of these experimental and modelling studies will be presented.

Back to top

Formal Engineering Methods for Software Quality Assurance

  • Shaoying Liu - Hosei University, Japan
  • 11am Friday 23rd July

Conventional software engineering on the basis of informal or semi-formal methods is facing tremendous challenges in ensuring software quality.

Formal methods have attempted to address those challenges by introducing mathematical notation and calculus to support formal specification, refinement, and verification in software development. The theoretical contributions to the discipline of software engineering made by formal methods researchers are significant. However, in spite of their potential in improving the controllability of software process and reliability, formal methods are generally difficult to apply to large-scale and complex systems in practice because of many constraints (e.g., limited expertise, complexity, changing requirements, and theoretical limitations).

We have developed "Formal Engineering Methods" (FEM) as a research area since 1990 to study how formal methods can be effectively integrated into the conventional software engineering process, so that formal techniques can be tailored, revised, or extended to fit the need for improving software productivity and quality in practice (e.g., through enhancing the usability of formalisms and the tool supportability of the relevant methods).

We have also developed a specific FEM called the Structured Object-Oriented Formal Language (SOFL) that offers rigorous but practical techniques for system modeling, transformation, and verification: three-step formal specification; transformation from structured specification to object-oriented implementation; and specification-based inspection and testing. The effective combination of these three techniques can significantly enhance software productivity and quality. The SOFL method has also achieved a good balance among the qualities of simplicity, visualization, and preciseness to allow engineers to easily use the method. In this talk, I will first give a brief introduction to FEM and then focus on the issue of how FEM is used for software quality assurance.

About the presenter

Shaoying Liu is Professor of Software Engineering at Hosei University, Japan. He holds a Ph.D in Formal Methods from the University of Manchester, U.K. His research interests include Formal Engineering Methods, Software Development Methodology, Software Inspection, Software Testing, Dependable Complex Computer Systems, and Intelligent Software Engineering Environments. He has published a book titled "Formal Engineering for Industrial Software Development Using the SOFL Method" with Springer-Verlag, four edited conference proceedings, and over 100 academic papers in journals and conferences. He founded the International Conference on Formal Engineering Methods (ICFEM) in Japan in November 1997 and is currently serving as its Steering Committee Chair. He is also serving on the editorial board of the Journal of Software Testing, Verification and Reliability and on the Advisory Board of the International Colloquium on Theoretical Aspects of Computing (ICTAC). He received an "Outstanding Paper Award" from the Second IEEE International Conference on Engineering of Complex Computer Systems (ICECCS 1996), and was recognized as one of the 15 "Top Scholars in the Field of Systems and Software Engineering (1993-1996)" by the Journal of Systems and Software in 1997. Liu is a Fellow of the British Computer Society, a Senior Member of the IEEE Computer Society, and a member of the Japan Society for Software Science and Technology.

Back to top

Making Sense of Text: The ReAD Principles

  • Dr Louis Massey - Royal Military College of Canada
  • 11am Friday 16th July

An important practical problem in artificial intelligence is to design algorithms capable of autonomously determining what a text document is about, that is, its topics. The main motivation in solving this problem is to support information retrieval applications such as web search engines and corporate document management systems. Unfortunately, current techniques such as text clustering and document categorization fail to capture the inherent meaning of text. Given the continually increasing amount of human knowledge stored in electronic text repositories, the resolution of this issue is critical. I will present a totally new type of algorithm that discovers topics in a single text document, without reliance on corpus statistics and without the usual bag-of-words vector representation. The algorithm I will present takes the meaning of words into account and is based on principles that differ radically from existing natural language processing and artificial intelligence techniques.

About the presenter

Dr Massey is an Assistant Professor in the department of Computer Science at the Royal Military College of Canada. Before joining academia, he was a senior officer in the Canadian Air Force, specializing in Information Technology Support. During his military career, Dr Massey gained vast practical experience in project management and in information systems design. His professional experience influenced Dr Massey's research interests in scalable, real-world textual information management. Recently, Dr Massey pioneered a method to determine the topics of documents, for which a patent was filed in April 2010. Dr Massey also has interests in the human, professional, organisational and social aspects of computing. In addition to his passion for teaching and research, Dr Massey loves to spend time hiking, writing fiction and painting.

Back to top

The Impact of Privacy on Localization Algorithms

  • Chang Liu - PhD
  • 11am Friday 2nd July

Users' location is one of the most important attributes employed when delivering personalised services. With the dramatic increase in the use of smartphones and other WiFi-equipped mobile devices, WiFi-based localization has remained an active research area in recent years. Many existing WiFi localization algorithms can achieve room-level accuracy when deployed in indoor environments.

In this talk I'll first present some comparisons of the accuracy of the most widely used WiFi localization algorithms, re-examined under simulation. Users differ in the degree to which they are willing to share their personal information, depending on their demands for privacy protection, and this affects the potential accuracy of personalised localization algorithms as well. I'll then talk about a planned real-world experiment in an open, outdoor environment.
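As background, the fingerprinting family of WiFi localization algorithms can be illustrated with a toy nearest-neighbour sketch. This is not any specific algorithm from the talk: the access-point layout, RSSI values and parameter choices below are all hypothetical.

```python
import numpy as np

# Hypothetical fingerprint database: each entry maps a known (x, y) position
# in metres to the RSSI values (dBm) observed from a fixed set of three APs.
FINGERPRINTS = {
    (0.0, 0.0): [-40, -70, -80],
    (5.0, 0.0): [-70, -42, -75],
    (0.0, 5.0): [-75, -72, -45],
    (5.0, 5.0): [-80, -60, -50],
}

def locate(rssi, k=2):
    """k-nearest-neighbour fingerprint localisation: average the k reference
    positions whose stored RSSI vectors are closest (Euclidean) to the query."""
    positions = np.array(list(FINGERPRINTS.keys()))
    signatures = np.array(list(FINGERPRINTS.values()), dtype=float)
    dists = np.linalg.norm(signatures - np.array(rssi, dtype=float), axis=1)
    nearest = np.argsort(dists)[:k]
    return positions[nearest].mean(axis=0)
```

Privacy enters exactly here: if a user withholds or coarsens some of the RSSI readings, the distance computation has less information to work with, which is one way the reduced sharing discussed above can degrade accuracy.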

Back to top

Design of Optimal Wireless Sensor Networks: Performance Analysis and Enhancements of the IEEE 802.15.4 MAC Protocol for Unsaturated and Saturated WSNs

  • Alvaro Monsalve - Swinburne University of Technology
  • 11am Tuesday 22nd June

Wireless sensor networks (WSNs) have emerged as a new technology with potential real applications in the field of monitoring systems. A sensor node is a battery-powered device that requires efficient use of its energy resources and efficient sharing of the wireless communication medium. Consequently, realistic estimates of the performance of a network add significant value to the design process by enabling the choice of better design alternatives.

In this talk, we present a simple analytical model developed to evaluate the performance of the widely adopted IEEE 802.15.4 Medium Access Control (MAC) protocol, based on a carrier sense multiple access with collision avoidance (CSMA-CA) mechanism, in a single-hop unsaturated network. Additionally, we derive the optimal conditions (sensing rate, failure probability and packet arrival rate) for which channel throughput is maximised in unsaturated and saturated networks. We propose an algorithm for the design of optimal unsaturated networks with Poisson arrivals, where optimal refers to maximum channel throughput. Furthermore, a simple modification of the CSMA-CA mechanism is proposed for the design of optimal saturated WSNs.
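The speaker's analytical model is not reproduced here, but the existence of an optimal operating point can be illustrated with a textbook slotted-access approximation: if each of n saturated nodes attempts transmission in a slot independently with probability tau, normalised throughput is proportional to the probability that exactly one node transmits, which peaks at tau = 1/n.

```python
def success_probability(tau, n):
    """Probability that exactly one of n saturated nodes transmits in a
    slot, each attempting independently with probability tau -- a textbook
    slotted-access proxy for normalised channel throughput."""
    return n * tau * (1 - tau) ** (n - 1)

def optimal_attempt_rate(n, samples=10_000):
    """Grid-search the attempt rate maximising success probability; for
    this model the analytical optimum is tau = 1/n."""
    best = max((success_probability(k / samples, n), k / samples)
               for k in range(1, samples))
    return best[1]
```

For example, success_probability(0.1, 10) evaluates to 10 x 0.1 x 0.9^9, roughly 0.387, and the grid search recovers the analytical optimum tau = 1/n.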

About the presenter

Alvaro Monsalve is currently conducting research at Swinburne University of Technology. He received his Bachelor in Electronics Engineering from Simon Bolivar University, Venezuela, and a Masters in Wireless Systems and Related Technologies from the Polytechnic of Turin, Italy. In 2007 he was a research assistant in the laboratories of Telecom Italia in Turin. His main research interests include wireless technologies and the design of protocols and mechanisms for wireless sensor networks.

Back to top

The Implementation of Parallel Search Algorithms on Graphics Processing Unit (GPU) for Graph Theoretic Problems

  • Naresh Bhatty - Masters
  • 11am Friday 18th June

Search algorithms are the backbone of artificial intelligence. Their efficiency can be multiplied by running them in parallel on different processing devices and consolidating the outputs to find the required result. Parallel implementations of these algorithms on a Central Processing Unit (CPU) yield better results than sequential algorithms, but the overall performance is limited by the CPU's processor count. Here we present a parallel implementation of graph search algorithms on the Graphics Processing Unit (GPU) using the Compute Unified Device Architecture (CUDA). GPUs are equipped with hundreds of processors on a single card and are reasonably priced compared to CPUs. CUDA is a massively parallel computational environment designed to unleash the vast computational power of GPUs. We expect this parallel implementation of search algorithms on the GPU to be effective for a variety of graph-theoretic problems such as the Travelling Salesman Problem and the Vertex Cover Problem.

An effort was made to implement the A* search algorithm without a proper heuristic, yet the GPU implementation still outperformed a CPU implementation with a heuristic. A serial implementation of these algorithms was compared to the parallel GPU implementation; results from initial experiments show that the GPU implementation achieved a speedup of between 110x and 142x.
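The CUDA kernels themselves are not given in the abstract; the sketch below illustrates, in plain Python, the level-synchronous frontier-expansion pattern that GPU breadth-first search implementations typically parallelise, with one thread per frontier vertex.

```python
def parallel_bfs_levels(adj, source):
    """Level-synchronous BFS: each iteration expands the whole frontier at
    once -- on a GPU, every frontier vertex would be handled by its own
    thread. Returns the BFS level (hop distance) of each vertex, or -1 if
    unreachable. `adj` is an adjacency list: {vertex: [neighbours]}."""
    level = {v: -1 for v in adj}
    level[source] = 0
    frontier = [source]
    depth = 0
    while frontier:
        depth += 1
        # This loop over the frontier is the data-parallel step: each
        # vertex's neighbour expansion is independent of the others.
        next_frontier = []
        for v in frontier:
            for w in adj[v]:
                if level[w] == -1:
                    level[w] = depth
                    next_frontier.append(w)
        frontier = next_frontier
    return level
```

The synchronisation barrier between one frontier and the next is what makes the pattern map cleanly onto successive GPU kernel launches.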

Back to top

Adaptive Real-Time, Self-Learning River Basin Management: Challenges

  • Jorg Imberger - Director, Centre for Water Research
  • 11am Friday 11th June

Natural systems such as catchments, rivers, lakes, estuaries and coastal seas are under increasing threat from depletion of biodiversity, nutrient enrichment, metal contamination and the introduction of very low levels of carcinogenic compounds. Human development is the cause of this degradation, so there is an urgent need to develop quantitative management strategies that balance the material benefits of development against the dangers of environmental degradation. A new methodology, based on the Index of Functional Sustainability (ISF), has recently been developed to provide such a quantitative foundation.

This methodology may be coupled with real-time measurements of water properties in a natural system, the data from which are checked for integrity and then archived into a flexible relational database by the Aquatic Real Time Management System (ARMS). ARMS also controls a series of numerical models (the Dynamic River Model (DYRIM), the Dynamic Reservoir Simulation Model (DYRESM), the Estuary, Lake and Coastal Ocean Model (ELCOM) and the Computational Aquatic Ecosystem Dynamics Model (CAEDYM)) that run in real time using the real-time data. ARMS then carries out validation comparisons with real-time data from within the domain and performs self-learning corrections to the codes and forcing data. Further, ARMS automatically initiates, at regular intervals, simulation runs of pre-specified scenarios, computing the associated ISF ready for interrogation at a manager's convenience. A web-based interrogation tool called OLARIS is used both for mining the real-time database and for examining the results of the ARMS-initiated simulations. The suite of new instruments and software, combined with the ISF, collectively offers a totally new way of managing natural water bodies.

The talk will illustrate the new methodology as applied to three operating examples: Swan Canning, Western Australia; Lake Iseo, Italy; and the Rio de la Plata estuary, Argentina.

Back to top

Applications of Mathematical Modelling in Tissue Engineering

  • Edward Green - CSSE
  • 11am Friday 4th June

Tissue engineering might be called 'the science of spare parts'.

Although currently in its infancy, its long-term aim is to grow functional tissues and organs in vitro to replace those which have become defective through age, trauma or disease. But how do cells know how to make a tissue? A number of factors are known to affect the architecture of tissues grown in vitro, including nutrient levels, the concentration of various growth factors, applied forces, and the type of material in which the cells are seeded. In this talk, however, I will focus on cell-cell and cell-extracellular matrix interactions.

I will begin by looking at the interaction of two cell populations, and how this can influence the structure of cell aggregates grown in vitro.

Hepatocytes and stellate cells are two types of liver cell which, when cultured together, form aggregates more rapidly, and remain viable and functional for longer, than hepatocytes cultured alone. We have developed a new mathematical model to investigate two alternative hypotheses for the role of stellate cells in promoting aggregate formation. Under Hypothesis 1, each population produces a chemical signal which affects the other, and enhanced aggregation is due to chemotaxis. Hypothesis 2 asserts that the interaction between the two cell types is by direct physical contact: the stellates extend long cellular processes which pull the hepatocytes into the aggregates. The behaviour of the model under each hypothesis is studied using a combination of linear stability analysis and numerical simulations. The results show how the initial rate of aggregation depends upon the cell seeding ratio, and how the distribution of cells within aggregates depends on the relative strengths of attraction and repulsion between the cell types. We can also use our model to suggest experiments which could be performed to distinguish between the two hypotheses.
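
The kind of chemotaxis mechanism invoked under Hypothesis 1 can be sketched numerically in a few lines. The following is a minimal, purely illustrative 1D step (explicit finite differences, periodic boundaries); all parameter names and values are hypothetical and are not taken from the talk's model:

```python
# Toy 1D chemotaxis step: cell density n(x,t) diffuses and drifts up
# gradients of a signal c(x,t) that the cells themselves secrete.
# Periodic boundaries; parameters are illustrative only.

def laplacian(u, dx):
    m = len(u)
    return [(u[(i - 1) % m] - 2 * u[i] + u[(i + 1) % m]) / dx ** 2 for i in range(m)]

def gradient(u, dx):
    m = len(u)
    return [(u[(i + 1) % m] - u[(i - 1) % m]) / (2 * dx) for i in range(m)]

def step(n, c, dx, dt, Dn=0.01, Dc=0.1, chi=0.5, prod=1.0, decay=1.0):
    """One explicit Euler step of dn/dt = Dn n_xx - chi (n c_x)_x,
    dc/dt = Dc c_xx + prod*n - decay*c."""
    lap_n, lap_c = laplacian(n, dx), laplacian(c, dx)
    flux = [ni * gi for ni, gi in zip(n, gradient(c, dx))]   # chemotactic flux n * c_x
    div_flux = gradient(flux, dx)
    n_next = [n[i] + dt * (Dn * lap_n[i] - chi * div_flux[i]) for i in range(len(n))]
    c_next = [c[i] + dt * (Dc * lap_c[i] + prod * n[i] - decay * c[i]) for i in range(len(c))]
    return n_next, c_next
```

With this discretisation the total cell mass is conserved exactly (both the diffusion and the divergence terms telescope under periodic boundaries), which is one quick sanity check available in simulations of this type.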

In the second part of the talk, I will look at the mechanical properties of biological materials, which are thought to have a strong influence on cell behaviour. Biological gels (such as collagen gels) used in tissue engineering have a fibrous microstructure which affects the way forces are transmitted through the material, and in turn affects cell migration and other behaviours. In order to understand the effects of mechanical interactions between the cells and the matrix on tissue architecture, we need to understand the mechanics of the gels themselves. I will present a simple continuum model of gel mechanics, based on treating them as transversely isotropic viscous materials. Two simple canonical problems are considered involving thin two-dimensional films: extensional flow, and squeezing flow of the fluid between two rigid plates. Neglecting inertia, gravity and surface tension, in each regime we can exploit the thin geometry to obtain a leading-order problem which is sufficiently tractable to allow the use of analytic methods. I discuss how these results could be exploited practically to determine the mechanical properties of real gels, and how this work could be extended to explore the role of gel mechanics in determining tissue architecture.


Computational Modelling of In Vivo DNA Label-retaining Dynamics in Hematopoietic Stem Cells

  • Richard Van Der Wath - CSSE
  • 11am Friday 28th May

Hematopoietic stem cells (HSCs) are responsible both for lifelong daily maintenance of all blood cells and for repair after cell loss caused by infection or toxic insults. Recently, biological evidence has been found indicating that a small subset of HSCs (d-HSCs) is predominantly dormant but can be reversibly induced into active proliferation upon injury. In this talk I present computational analyses further supporting the d-HSC concept through extensive modelling of experimental DNA label-retaining cell (LRC) data. I will demonstrate how quantifying HSC division kinetics can help test hypotheses about the population biology of HSCs. Instead of advocating a single type of model for the specific system under consideration, I follow a comprehensive modelling approach. Based on two related but independent datasets I define mathematical models that are discrete, continuous, stochastic, or deterministic. Each model has its own strengths and weaknesses, and with each we learn something different about the data.
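
As a toy illustration of the deterministic end of such a modelling spectrum, one can ask what fraction of cells still retains detectable label if divisions arrive as a Poisson process and label intensity halves at each division. The rates and the detection threshold below are hypothetical, not the values from the study:

```python
import math

def lrc_fraction(rate, t, d_max):
    """Fraction of cells still label-retaining at time t, assuming
    (hypothetically) that divisions follow a Poisson process with the given
    rate (divisions per unit time) and that the label drops below detection
    after more than d_max divisions (intensity halves per division).
    This equals the Poisson CDF at d_max with mean rate*t."""
    mean = rate * t
    return sum(math.exp(-mean) * mean ** d / math.factorial(d)
               for d in range(d_max + 1))
```

Comparing a slow ("dormant") division rate against a fast ("active") one shows how division kinetics separate the two populations in a label-retaining assay: the slowly dividing population retains a much larger labelled fraction at any given time.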


Networking is Not Just Social

  • Dr Chris McDonald - CSSE
  • 11am Friday 21st May

The edge of the Internet is increasingly wireless and mobile. The current generation of Computer Science students are far more likely to engage with the Internet through their own wireless, mobile devices than they are using wired, desktop computers. Traditional approaches to teaching computer networking were formed when the Internet was primarily a fixed wired infrastructure, and this historical background still forms most of the material in contemporary textbooks on computer networking.

Today's students have strong expectations that their computer networking studies will have a significant focus on the networking devices and applications that they use daily - increasingly mobile and wireless.

This informal seminar will present an overview of the opportunities that mobile devices offer to the teaching of networking courses by discussing this semester's undergraduate project in Computer Networks. The seminar will focus on the goals of the project, the challenges experienced by our students, and the mechanics necessary to support such projects.


Situation Discovery From Sensor Data

  • Anthony Blond - PhD
  • 11am Friday 14th May

The situation recognition problem is to extract information in the form of patterns of events obtained from sensors. Supervised learning techniques can be employed to discover these patterns, but must be adapted before being used directly on sensor data. We present results that demonstrate that supervised learning can be used for the automated extraction of situation patterns from real smart home data. We identify advantages of this process over the manual specification of situation representations, and discuss shortcomings that need to be overcome.
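
As a minimal illustration of the supervised-learning side of this problem, the sketch below applies the simplest possible classifier, a 1-nearest-neighbour rule over binary sensor snapshots. The sensor names and situation labels are invented for the example and are not from the dataset discussed in the talk:

```python
def hamming(a, b):
    """Number of sensors whose on/off state differs between two snapshots."""
    return sum(x != y for x, y in zip(a, b))

def predict(train, snapshot):
    """1-nearest-neighbour: return the situation label of the labelled
    snapshot closest (in Hamming distance) to the new one.
    train is a list of (sensor_vector, situation_label) pairs."""
    return min(train, key=lambda pair: hamming(pair[0], snapshot))[1]

# Invented smart-home snapshots: (kitchen_motion, kettle_on, bed_pressure)
train = [
    ((1, 1, 0), "making tea"),
    ((1, 0, 0), "in kitchen"),
    ((0, 0, 1), "sleeping"),
]
```

A real system would of course work on event sequences rather than single snapshots, but the same supervised structure applies: labelled examples in, a situation predictor out.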


Trees, Enzymes, Computers: Challenges in Bioinformatics

  • Greg Butler - Computer Science and Engineering, Concordia University, Montreal
  • 11am Friday 7th May

The focus of our research is on sustainability through the replacement of chemical processes (often based on petrochemicals) with biological processes using enzymes. We search the genomes of fungi for enzymes with potential for industrial applications utilizing renewable non-food biomass such as trees, straws, and grasses. Enzymes that decompose lignocellulose, the building material of plant cell walls, are therefore important.

Bioinformatics is the use of computers to manage, analyze, and mine data, to help bench scientists prioritize their work, and to translate data into knowledge. This talk will highlight some of the gaps between what we would like to do and what we can currently do. In particular, I will emphasize some roles of tree and graph data structures in bioinformatics algorithms, particularly those for the annotation of enzyme function using phylogenomics.


Improving the Quality of Students’ Programming Assignments

  • Rachel Cardell-Oliver - CSSE
  • 11am Friday 23rd April

In current practice, student programming assignments are characterised by short deadlines and marking criteria focussed on functional correctness. Student programming assignments rarely have the professional context of an ongoing software lifecycle or of a development team. A necessary consequence of doing software development in this context is that many aspects of software quality are ignored. This situation motivates the research question: how can educators change existing pedagogy and assessment to improve the quality of students' programming assignments within the given time constraints? In this seminar I will outline changes that I have introduced into first year programming units in order to improve students' understanding and practice of software quality. I will also present results of experimental evaluations of the effectiveness of these changes.


Drift-correcting Template Update Strategy for Precision Feature Point Tracking

  • Xiaoming Peng - CSSE
  • 12pm Friday 16th April

In this seminar I will talk about previous work on template-based interest point tracking. First, I will review the problems of existing feature-based and template-based methods. Then, I will present a drift-correcting template update strategy for precisely tracking a feature point in 2D image sequences. The strategy incorporates a non-rigid registration step to extend the tracking sustainability of an original drift-correcting template update strategy. Finally, I would welcome ideas from the audience on how to improve the work.
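
To make the general idea concrete, here is a simplified 1D sketch of a drift-correcting template update: track with the current (updated) template, re-align against the first-frame template to cancel accumulated drift, then update the template. The signals, window sizes and search radii are invented, and the non-rigid registration step from the talk is not modelled:

```python
def ssd_match(signal, template, centre, radius):
    """Return the offset within +/-radius of centre minimising the sum of
    squared differences between the template and a window of the signal."""
    best_score, best_pos = float("inf"), centre
    for p in range(max(0, centre - radius), centre + radius + 1):
        window = signal[p:p + len(template)]
        if len(window) < len(template):
            continue  # window falls off the end of the signal
        score = sum((w - t) ** 2 for w, t in zip(window, template))
        if score < best_score:
            best_score, best_pos = score, p
    return best_pos

def track(frames, first_template, radius=3):
    """Track through a list of 1D frames: match with the current template,
    correct drift by re-matching against the first-frame template in a
    small neighbourhood, then update the template from the current frame."""
    pos, template = 0, list(first_template)
    for frame in frames[1:]:
        pos = ssd_match(frame, template, pos, radius)    # naive update match
        pos = ssd_match(frame, first_template, pos, 1)   # drift correction
        template = frame[pos:pos + len(first_template)]  # template update
    return pos
```

Without the correction step, small localisation errors accumulate into the updated template and the tracked point drifts away from the true feature; re-anchoring to the first-frame template bounds that drift.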


Space – There’s a Lot of Data Out There

  • Kevin Vinsen - International Centre for Radio Astronomy Research (ICRAR)
  • 11am Friday 12th March

The Hitchhiker's Guide to the Galaxy says: "Space is big. Really big. You just won't believe how vastly, hugely, mind-bogglingly big it is. I mean, you may think it's a long way down the road to the chemist's, but that's just peanuts to space. Listen!" and so on. Douglas Adams was absolutely correct.

The International Centre for Radio Astronomy Research (ICRAR) aims to achieve research excellence in astronomical science and engineering. As a coherent and unified part of Australia’s national effort, ICRAR is making a fundamental contribution to the realisation and scientific success of the Square Kilometre Array (SKA) and ICRAR will drive Australia's bid to be the site of a new radio telescope capable of seeing the early stages of the formation of galaxies, stars and planets.

The ICRAR Archive Research Environment (ARE) will act as the main storage facility for the other ICRAR Phase 1 system components. These include the Simulations, the Murchison Widefield Array (MWA) Realtime Computer (RTC), the Architecture Research and Development Environment, and the Front End systems. The ARE will fulfil two roles for these components: first, it will provide long-term archiving for the data products of the Simulation and MWA RTC components; second, it will act as a data buffer for the Architecture Research and Development Environment, where specialist high-performance computing research will be performed. It will provide high-speed parallel access to the archived data, as well as to temporary data such as ASKAP results retrieved from the CSIRO archive for processing. Access and control of the ARE will occur via the Front End component, allowing ICRAR researchers to search, process, manage, visualise and retrieve data products.

In Phase 1, the Simulation and MWA RTC components will produce data products at rates of 0.3 PB/year and 5.9 PB/year respectively. Along with all the other science, this means the initial archive size for the first 18 months will be around 11.2 petabytes. Subsequent phases will need to increase this capacity by at least 6.15 PB per 12 months of operation. If Australia wins the SKA, by 2020 this will mean storing exabytes of data.


Improving Image Retrieval Using Text Mining and Automatic Image Annotation

  • Yinjie Lei - PhD
  • 11am Friday 26th February

The Internet is at present the most efficient platform for obtaining and sharing information. However, the explosion in the number of digital images threatens their management and usability. For this reason, image searching and retrieval has been a very active research field.

In this research, a novel image retrieval system is proposed to overcome the drawbacks of query-by-example image retrieval. The goal is to assist image retrieval by supplying users with a keyword-based interface similar to those of image retrieval systems that use image descriptors. Moreover, most current keyword-based systems return ambiguous images due to the polysemous nature of words, so this project attempts to fuse automatic image annotation with text mining techniques to produce a novel approach to image retrieval. In addition, owing to the immaturity of automatic image annotation algorithms, the project also investigates more robust and accurate algorithms for automatic image annotation.

In this presentation, I will focus on reviewing text mining techniques and several automatic image annotation techniques. More importantly, I will also describe the proposed image retrieval system, which integrates text mining and automatic image annotation.


Improving Your Erlang Programs and Tests With Wrangler

  • Simon Thompson - University of Kent, UK
  • 11am Friday 19th February

Wrangler is a refactoring tool for the Erlang programming language, written in Erlang itself. After introducing the tool and showing some of its features, I will concentrate on the 'similar code' detection facilities of Wrangler and discuss how they are implemented. These features, combined with its portfolio of refactorings, allow test code to be shrunk dramatically under the guidance of the test engineer; this is illustrated with an extended example of some commercial test code. The talk will also include an overview of the ProTest project, sponsored by the European Union, which funds this work.


Robust People Tracking Based on Scene Learning

  • Zhengqiang Jiang - PhD
  • 11am Friday 29th January

At present, it is still difficult to build a robust video-based people tracking system for dense, crowded scenes because of shadows and occlusion. Visual people tracking is relevant to many areas, such as motion-based people recognition, video retrieval and human-computer interaction.

In this project, I will develop novel computer vision techniques for video-based people tracking that are suitable for complex environments. The objective of these techniques is to track people in the presence of partial or complete occlusion. The pipeline of the proposed system can be divided into two main processes: acquiring knowledge of the scene and obstacles, and implementing the proposed people tracking system based on that knowledge. In the first stage, the scene learning step identifies the position and orientation of the ground plane relative to the video cameras; similarly, the obstacle learning step identifies the locations of obstacles relative to the cameras. After the scene and obstacle learning stage, the proposed system has an overall knowledge of the scene, with which moving people in the image plane can be projected onto the ground plane and tracked on that plane.
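
The projection step at the end of that pipeline is a standard planar homography from the image plane to the ground plane. A minimal sketch (with invented matrices, not calibrated ones) looks like this:

```python
def project_to_ground(H, point):
    """Map an image point (u, v) onto the ground plane using a 3x3 homography
    H (given as a list of three rows), via homogeneous coordinates: apply H
    to (u, v, 1), then divide by the third component to return to Euclidean
    ground-plane coordinates."""
    u, v = point
    x = H[0][0] * u + H[0][1] * v + H[0][2]
    y = H[1][0] * u + H[1][1] * v + H[1][2]
    w = H[2][0] * u + H[2][1] * v + H[2][2]
    return (x / w, y / w)
```

In practice H would be estimated during the scene learning stage, e.g. from point correspondences between the image and known ground-plane positions; the function above only applies a given homography.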

In this talk, I will focus on several visual tracking techniques that I have investigated for the literature review part of the project. These techniques include template matching, optical flow methods, homography-based techniques, appearance-based models, colour-based models, state space methods, etc. Furthermore, I will also describe the various components involved in my proposed visual people tracking system.




Last updated:
Wednesday, 13 February, 2013 8:23 AM