School of Computer Science and Software Engineering

Seminars 2012

School Research Seminars Presented in 2012

  1. White noise techniques in fly motion vision
  2. Three Dimensional Audio Visual Biometric Recognition
  3. Real Time Object Recognition using RGB-D Video Data
  4. Robust Three Dimensional Audio-Visual Speech Recognition
  5. 3D Facial Morphometric Analysis: Applications to Syndrome Delineation
  6. Verifying Real-time Systems Using Dense-time Model Checking Technology
  7. Computational methods for refinement quantified modal logics
  8. A computer-assisted framework based on the Bloom-Anderson taxonomy for teaching mathematics in early primary years
  9. 100 years of Alan Turing
  10. What's Happening in Computer Science Education?
  11. From logic to automata … and now back to logic
  12. Verifying Real-time Systems Using Dense-time Model Checking Technology
  13. Large Scale Web Serving
  14. Computation Elimination Algorithms for Correlation Coefficient Based Fast Template Matching
  15. A computer-assisted framework based on the Bloom-Anderson taxonomy for teaching mathematics in early primary years
  16. Detecting Skyline points: How to support the decision making process when the size of data is overwhelming

White noise techniques in fly motion vision

  • Karin Nordström, Department of Neuroscience, Uppsala University
  • Friday 16th November 2012
Abstract

Many animals, including humans and other vertebrates, visually orient towards salient features when navigating in the surround. Such figures may represent perch sites, landing sites, landmarks used for home-base navigation, feeding sources for nectar-eating insects or birds, or other salient features in the natural world. Visual figure tracking has been particularly well studied in insects, which have been shown to actively fixate a high-contrast bar in the frontal visual field during flying and walking. Fly orientation towards salient features is robust not only in freely behaving animals, but the behavior can also be closely mimicked in closed or open loop in virtual flight arenas.
In higher-order motion stimuli the direction of object motion does not follow the direction of luminance change. Such stimuli might, for example, be generated by the wing movements of a flying bird, and could be further complicated by the bird's motion in and out of shadows. It has long been known that humans readily perceive the direction of higher-order motion, even though this stands in stark contrast to prevailing motion vision models. Surprisingly, fruit flies are also able to follow the direction of higher-order motion cues.
To investigate the neurophysiology underlying the behavioral tracking of bars containing higher-order motion cues, we use intracellular electrophysiology of motion vision sensitive neurons in the optic ganglia of the fly. We record responses to stimuli containing either pure elementary motion, pure figure motion, or a combination of both in the form of Fourier and theta motion. Using this method, we show that motion sensitivity can be broken down into two separate streams, which code for elementary motion and figure position, respectively. The second one is particularly interesting, since it is not encoded by standard motion detectors. Furthermore, we show that we can predict the responses to Fourier and theta motion, based on the individual responses to figure and elementary motion, respectively.
We hope to develop a more efficient protocol for probing neurons by using white noise techniques to study feature sensitivity. During my talk I will present some of our pilot data using this approach, and I look forward to receiving feedback on alternative methods.

About the Presenter

Karin Nordström completed her PhD in Cell and Organism Biology at Lund University in 2003, working on the evolution of eyes. She then did her post-doctoral training at the University of Adelaide, supported by the Swedish and Australian research councils. During her time with David O’Carroll she learnt intracellular electrophysiology and used this technique to study single neurons sharply tuned to moving features. By the time she left Adelaide, in 2009, she was a Lecturer in Physiology. Nordström currently holds a senior research fellowship, funded by the Swedish research council, at Uppsala University. She heads a small group working on motion vision in flies. Her group web page can be found at
http://www.neuro.uu.se/research/physiology/neurophysiology-of-motion-vision/.



Three Dimensional Audio Visual Biometric Recognition

  • Mohammad Rafiqul Alam
  • Wednesday 14th November 2012
Abstract

An Audio-Visual (AV) biometric system is a multimodal biometric system that uses speech and face/mouth images for person recognition. Fusing multiple modalities of different natures and dimensions, and achieving robustness to noise, pose challenges in developing a high-performance AV biometric system. Existing research in AV biometrics is focused on handling noise in the data, extracting features from the two-dimensional (2D) face/mouth images and the speech signal, and fusing the modalities. However, the incorporation of three-dimensional (3D) facial features is a still unexplored yet promising research area. This study aims to integrate speech and 2D-3D facial features for person recognition. The proposed system will be robust to noise and more versatile in terms of its applications.

About the Presenter

Mohammad Rafiqul Alam is a PhD student supervised by Prof Mohammed Bennamoun, A/Prof Roberto Togneri and Asst/Prof Ferdous Sohel.



Real Time Object Recognition using RGB-D Video Data

  • Syed Afaq Ali Shah
  • Friday 9th November 2012
Abstract

2D/3D object recognition has gained significant attention from the research community over the past few decades. The goal of object recognition is to determine the identity of an object observed in a scene from a set of known objects in a database. Automatic real-time object recognition will be a key feature of future-generation robots. A key application of these robots is to provide assistance to the elderly and people with disabilities. Other applications include underwater exploration, rescue operations, maintenance operations for the oil and gas industry, and military operations such as cave clearing, the search for underground tunnels and remote monitoring. Despite decades of research in robot vision, the performance of existing state-of-the-art recognition systems does not allow mainstream deployment of autonomous robotic object recognition systems in real-world environments. Occlusion, clutter and variations in illumination limit recognition performance. Moreover, existing object recognition systems operate primarily in controlled environments, e.g. scenes with plain backgrounds. This research will explore automatic 3D object recognition in real time under clutter and occlusion. The proposed study will use improved image features and techniques to accurately recognize 3D objects in a scene in real time.

About the Presenter

Syed Afaq Ali Shah is supervised by W/Prof. Mohammed Bennamoun, A/Prof. Farid Boussaid and Asst/Prof. Amar A. El-Sallam, School of Computer Science and Software Engineering, UWA.



Robust Three Dimensional Audio-Visual Speech Recognition

  • Chao Sui - PhD
  • Tuesday 6th November 2012
Abstract

In human perception, acoustic signals are not the only source from which individuals interpret language: listeners also obtain verbal information from non-verbal sources, for example lip movements. The technique that utilizes visual information and acoustic signals simultaneously for the speech recognition task is called audio-visual speech recognition (AVSR). Although 2D-based AVSR is considered a more robust method than audio-only ASR (as it emulates human speech recognition), it is susceptible to illumination and face pose variations. Given the decreasing costs of commercial stereo cameras and 3D scanning technology, 3D-based AVSR is viewed as an effective solution to the limitations of its 2D-based counterpart. This research aims at finding an accurate and robust 3D-based AVSR technique which can be applied in real-life applications.

About the Presenter

Chao Sui is a PhD student with Mohammed Bennamoun & Roberto Togneri



3D Facial Morphometric Analysis: Applications to Syndrome Delineation

  • Syed Zulqarnain Gilani  - PhD
  • Friday 19th October 2012
Abstract

It is well known that the foetal brain and face develop in close harmony with each other. Therefore, any anomaly in the growing brain is likely to be reflected in the face. Autism Spectrum Disorder (ASD) is the most prevalent mental disorder and is believed to be linked to prenatal testosterone levels, which are in turn related to masculinity. Psychological research shows that masculinity can be assessed from the face. Therefore, 3D morphometric facial analysis may enable us to objectively measure the masculinity/femininity of the human face, thereby helping in ASD delineation. Early intervention is key to reducing the effects of ASD, which manifests itself in a child's behavioural traits and, in the worst case, cannot be diagnosed until three years of age. This research aims at morphometric analysis of the 3D face to objectively assess masculinity for early delineation of ASD in prepubertal children.

About the Presenter

Syed Zulqarnain Gilani is a PhD student with Dr Ajmal Mian & Professor Murray Maybery.



Verifying Real-time Systems Using Dense-time Model Checking Technology

  • Omar Al-Bataineh - PhD
  • Friday 27th July 2012
Abstract

Real-time systems are systems that are designed to run applications and programs with very precise timing and a high degree of reliability. Such systems are said to have failed if they cannot guarantee a response within strict time constraints. Hence, real-time systems need to be rigorously modeled and verified in order to have confidence in their correctness with respect to the desired properties. One promising verification technique that has been used successfully in checking critical systems such as hardware designs, communication protocols and distributed systems is model checking.
In the first part of the talk we will give an overview of the current state of the art in model checking real-time systems, and then discuss commonly used formalisms and specification languages for the task. The second part of the talk will cover some recent results based on our paper at TIME 2012; in particular, we will discuss how one can use dense-time model checking technology to perform a Worst-Case Execution Time (WCET) analysis and then to discover the scenarios that lead to the WCET.

About the Presenter

Omar Al-Bataineh is a PhD student with Mark Reynolds, Tim French, and Terry Woodings. He received his ME degree from UNSW in 2011. His main research interests are in temporal logic, epistemic logic, model checking technology, and knowledge-based programs.



Computational methods for refinement quantified modal logics

  • James Hales - PhD
  • Friday 20th July 2012
Abstract

Informative updates are events that provide agents with additional information about the state of the world, whilst leaving the world itself unchanged. Refinement quantified modal logics allow us to pose questions about the existence of informative updates that result in particular states of knowledge for the agent, subject to certain constraints. This project aims to investigate computational methods in the resulting refinement quantified modal logics, including methods to synthesise informative updates according to desired properties in refinement quantified epistemic modal logic.



A computer-assisted Framework Based on the Bloom-Anderson Taxonomy for Teaching Mathematics in Early Primary Years

  • Nasrin Moradmand – PhD
  • Friday 15th June 2012
Abstract

With the world moving rapidly into digital media and information, the ways in which learning activities in mathematics can be created and delivered are changing. However, to get the best results from the integration of ICTs in education, any application's design and development need to be based on pedagogically appropriate principles, in terms of both learners' and teachers' needs.

In the first part of this seminar, I will point to some of the experience gained in conducting a research project between the computer science and education departments, and the process of getting permission to test and use a research project's outcome in a real-world setting (in this case, primary schools). In the second part, a new framework for using ICT in the lower primary mathematics classroom, based on current pedagogical views of mathematics education and a cognitivist learning theory, will be presented. With this framework, it has been possible to design and develop an interactive multimedia application for teaching mathematics in lower primary school classrooms that is currently being trialed at several schools (both public and private). Finally, the evaluation of the framework and the multimedia application will be discussed.



100 years of Alan Turing

  • Prof Mark Reynolds - Head of School of CSSE; Logic and Computation Research Group in ECM/CSSE
  • Friday 8th June 2012
Abstract

This year is Alan Turing's centenary. He was born on June 23rd 1912 and is widely regarded as the father of computer science.

To celebrate, I will present an account of some of Turing's main contributions to Computer Science and mention some of the other events going on this year.

There are many other events planned to celebrate his contribution.

You might also like to check out:

1) The Alan Turing Centenary Conference http://www.turing100.manchester.ac.uk/

2) The 2012 Turing Year site http://www.mathcomp.leeds.ac.uk/turing2012/

3) A bit about Turing and his work http://en.wikipedia.org/wiki/Alan_Turing



What's Happening in Computer Science Education?

  • Dr Chris McDonald
  • Friday 18th May 2012
Abstract

This informal seminar will report on two international conferences, held earlier this year, that are considered important to the Computer Science Education community.

The seminar will provide an overview of the themes and trends evident at the conferences, including international curricula, accreditation, research-led investigations into Computer Science pedagogy, the growth in teaching through computing devices, and notable projects and industry support.



From Logic to Automata … and Now Back to Logic

  • Anthony Widjaja Lin
  • Friday 11th May 2012
Abstract

Logic and automata are two sides of the same coin. There is a long tradition in mathematical logic and computer science of solving logical problems by reductions to automata problems. This tradition was initiated by Büchi in 1960, who showed that the weak monadic second-order theory of one successor is decidable by a reduction to language emptiness of finite-state automata. Büchi's insights have proven to be extremely valuable in the fields of formal verification, databases, and the theory of knowledge (to name a few), in which many problems with a logical flavor have hitherto been solved by reductions to problems for automata. Nowadays, some of these techniques are an integral part of formal verification tool suites in industry (e.g. at Intel, Motorola, NASA).

In the past decade, there has been a lot of progress in the development of fast SAT solvers and more general SMT (Satisfiability Modulo Theories) solvers. Nowadays SAT/SMT solvers can successfully deal with boolean formulas with a very large (more than 10^6) number of variables. Triggered by this progress, there has been an emerging trend to solve automata problems (especially in the field of formal verification) by reducing them to logical theories for which there are "fast solvers". In formal verification, automata models are frequently used for modeling real-world systems since they are clean, expressive, and amenable to analysis. In this talk, I will discuss several recent results of this flavor in the area of formal verification. In particular, I will focus on automata problems that can be reduced to checking satisfiability of "tractable" fragments of Presburger Arithmetic, for which there are fast solvers like Z3.

About the Presenter

Anthony W. Lin received his PhD from the University of Edinburgh's School of Informatics in 2010 under the supervision of Leonid Libkin and the co-supervision of Richard Mayr. His PhD thesis concerns generic and specific techniques for infinite-state model checking. He is currently an EPSRC Postdoctoral Research Fellow at Oxford University's Department of Computer Science. His main interests lie in developing theoretically sound and practically applicable techniques in formal verification, utilizing techniques from logic and automata. Dr. Lin is known for his contributions to infinite-state verification, especially methods for reducing infinite-state verification problems to "tractable" fragments of Presburger Arithmetic (for which fast SMT solvers are available). He was awarded the LICS 2010 Kleene Best Student Paper Award for his paper titled "Parikh Images: Complexity and Applications".



Verifying Real-time Systems Using Dense-time Model Checking Technology

  • Omar Al-Bataineh - PhD proposal
  • Friday 23rd March 2012
Abstract

Real-time systems are systems that are designed to run applications with very precise timing and a high degree of reliability. Ensuring the correctness of real-time systems is a challenging task, mainly because their correctness depends on the actual times at which events occur. Hence, real-time systems need to be rigorously modeled and verified in order to have confidence in their correctness with respect to the desired properties.

Temporal logic model checking has been successfully used to find non-trivial errors in hardware designs, distributed systems, and security protocols. Model checking real-time systems has been a topic of recent interest, and in the last decade researchers have developed several tools for this purpose, such as HyTech, KRONOS, UPPAAL, and RED.

The purpose of this research is to significantly advance both the theory and the algorithmics of the design and verification of real-time systems. An important outcome of this project will be algorithms for model checking real-time systems. Another will be to enhance our understanding of the use of temporal logic for reasoning about real-time systems, building on its highly successful use for discrete-time systems.

About the Presenter

Omar Al-Bataineh is a PhD student with Mark Reynolds, Tim French, and Terry Woodings.



Large Scale Web Serving

  • Simon Newton - Google Inc
  • Friday 9th March 2012
Abstract

This talk describes how to scale web serving infrastructure from a single machine running in a garage to sites the size of Google and Facebook. It includes a discussion of how an obscure network bug was diagnosed and fixed, as well as some of the lessons learnt along the way.

About the Presenter

Simon graduated from UWA in 2005 after studying Computer Science and Software Engineering. After working for iiNet he joined Google in 2006 and relocated to San Francisco where he now works as a tech lead for the Edge Infrastructure group.

His hobbies include working on the Open Lighting Project, an effort that grew out of his Honours dissertation aimed at building reliable, open source software for the entertainment lighting industry.



Computation Elimination Algorithms for Correlation Coefficient Based Fast Template Matching

  • Dr. Arif Mahmood
  • Friday 16th March 2012
Abstract

Template matching is frequently used for object recognition, tracking and the alignment of images. The correlation coefficient is a robust similarity measure for template matching; however, it makes the matching process computationally expensive. To reduce the computational cost, elimination algorithms have been used without any degradation in accuracy. In this talk, I will present two novel elimination algorithms to speed up correlation-coefficient-based template matching: one based on the transitivity of the correlation coefficient, and one based on the monotonic growth pattern of the correlation coefficient.

Using the triangle inequality of distance measures, we have derived transitive bounds on the correlation coefficient. Given three images A, B, and C, if the correlations between A and B and between B and C are known, then exact bounds on the correlation between A and C can be computed. These bounds are utilized to skip unfavourable search locations.
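As a rough illustration of the idea (a sketch, not necessarily the speaker's exact formulation), the bound follows from mapping correlation to Euclidean distance between zero-mean, unit-norm vectors, where d = sqrt(2 - 2r):

```python
import math

def corr(x, y):
    # Pearson correlation coefficient of two equal-length pixel vectors
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = math.sqrt(sum((a - mx) ** 2 for a in x) *
                    sum((b - my) ** 2 for b in y))
    return num / den

def transitive_bounds(r_ab, r_bc):
    # For zero-mean, unit-norm vectors, correlation r relates to Euclidean
    # distance d by d = sqrt(2 - 2r).  The triangle inequality
    # |d_ab - d_bc| <= d_ac <= d_ab + d_bc then bounds r_ac.
    d_ab = math.sqrt(2 - 2 * r_ab)
    d_bc = math.sqrt(2 - 2 * r_bc)
    lo = 1 - (d_ab + d_bc) ** 2 / 2
    hi = 1 - (d_ab - d_bc) ** 2 / 2
    return max(lo, -1.0), min(hi, 1.0)

# Toy "images" as flattened pixel vectors
A = [1, 2, 3, 4, 5]
B = [2, 1, 4, 3, 6]
C = [1, 3, 2, 5, 4]
lo, hi = transitive_bounds(corr(A, B), corr(B, C))
assert lo <= corr(A, C) <= hi
```

A search location C can then be skipped whenever its upper bound hi is already below the best correlation found so far, without computing corr(A, C) at all.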

In the second approach, we propose a monotonic formulation of the correlation coefficient for template matching. With this formulation, the correlation coefficient monotonically decreases from the highest possible value to the actual value as the pixels are processed. As soon as the value falls below a previously known best value, the remaining computation becomes useless and is therefore skipped, without any loss of accuracy.
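A hypothetical sketch of this kind of early termination, assuming pre-normalised pixel vectors and a Cauchy-Schwarz bound on the unseen pixels (the talk's exact monotonic formulation may differ):

```python
import math

def normalise(x):
    # zero-mean, unit-norm copy of a pixel vector
    n = len(x)
    m = sum(x) / n
    v = [a - m for a in x]
    s = math.sqrt(sum(a * a for a in v))
    return [a / s for a in v]

def corr_with_early_exit(t, w, best):
    # t, w: zero-mean, unit-norm vectors, so their correlation is the dot
    # product.  After each pixel, Cauchy-Schwarz bounds the contribution of
    # the unseen remainder, giving an upper bound that decreases
    # monotonically to the true correlation.  Once it drops below the best
    # value found so far, the rest of the computation is skipped.
    partial, rem_t, rem_w = 0.0, 1.0, 1.0
    for ti, wi in zip(t, w):
        partial += ti * wi
        rem_t -= ti * ti
        rem_w -= wi * wi
        if partial + math.sqrt(max(rem_t, 0.0) * max(rem_w, 0.0)) < best:
            return None  # this search location cannot beat 'best'
    return partial

t = normalise([1, 2, 3, 4, 5])
assert corr_with_early_exit(t, t, -1.0) > 0.999                        # perfect match survives
assert corr_with_early_exit(t, normalise([5, 4, 3, 2, 1]), 0.5) is None  # pruned early
```

Because the bound can only shrink towards the true value, aborting once it falls below the running best discards only locations that provably cannot win, which is why no accuracy is lost.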

About the Presenter

Arif Mahmood completed his PhD in computer vision at the Department of Computer Science, LUMS, in June 2011 (http://cvlab.lums.edu.pk/). He has recently joined the School of Computer Science and Software Engineering, UWA, as a Research Assistant Professor. His broad research interests are in computer vision and image processing, specifically image matching and alignment, human recognition using biometric signatures, image morphing, image inpainting and video coding. His complete list of publications may be found on his web page: http://suraj.lums.edu.pk/~arifm/



A Computer-assisted Framework Based on the Bloom-Anderson Taxonomy for Teaching Mathematics in Early Primary Years

  • Nasrin Moradmand - PhD proposal
  • Friday 17th February 2012
Abstract

The twenty-first century has so far seen rapid developments in technology. The use of Information and Communication Technologies (ICT) has become very common in all aspects of our daily lives. Within education, the development of new Information and Communication Technologies has resulted in significant changes in the way that teachers teach and children learn in school.

In regard to the teaching of mathematics, providing children with mathematical skills is essential, and it is important for teachers to learn how to make the best use of new technologies as they become available. With the world around children moving rapidly into digital media and information, the ways that these mathematical activities are created and delivered have changed. However, not all technology-based packages are usable for teaching and learning purposes within schools, as they are rarely designed with appropriate pedagogical frameworks in mind. To get the best result from the integration of ICT in education, any application's design and development need to be based on pedagogically appropriate principles, in terms of both learners' and teachers' needs.

The purpose of this research is to design a framework for effective use of ICT in mathematics education for lower primary school children. This framework will be based on current pedagogical views of mathematics education. An outcome of the research will be the development of appropriate software and multimedia content based on the framework. The software and multimedia documents will be used for teaching primary school students and the framework will be further refined based on the experience gained.



Detecting Skyline Points: How to Support the Decision Making Process When the Size of Data is Overwhelming

  • Maria Bravo Rojas - Master of Science proposal
  • 11am Friday 13th January 2012
Abstract

The computation of skyline points has become a very interesting topic because of its application to specialized database query processing for decision making. Skyline points provide a quick overview of the most important records in a multi-dimensional database. Nowadays, the complexity of the data delivered by a growing number of applications running on different devices and in different places increases the need for new and better methods of retrieving data for efficient decision making. This project aims to propose a method to compute skyline points in high-dimensional databases using distributed and threaded shared-memory algorithms. The load distribution model will allow progressive delivery of the skyline points.
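For concreteness: a skyline point is a record that no other record dominates, where (in a larger-is-better convention) one record dominates another if it is at least as good in every dimension and strictly better in at least one. A minimal sketch of the naive O(n^2) computation (real skyline algorithms, and the distributed ones this project targets, improve on this):

```python
def dominates(p, q):
    # p dominates q if p is at least as good in every dimension
    # and strictly better in at least one (larger is better here).
    return all(a >= b for a, b in zip(p, q)) and any(a > b for a, b in zip(p, q))

def skyline(points):
    # Naive O(n^2) skyline: keep each point that no other point dominates.
    return [p for p in points if not any(dominates(q, p) for q in points)]

# Hypothetical records, e.g. hotels scored by (rating, cheapness)
hotels = [(1, 9), (9, 1), (5, 5), (4, 4), (2, 8)]
assert (4, 4) not in skyline(hotels)  # dominated by (5, 5)
assert set(skyline(hotels)) == {(1, 9), (9, 1), (5, 5), (2, 8)}
```

The surviving points are exactly the Pareto-optimal trade-offs, which is why the skyline gives decision makers a compact summary of an overwhelming dataset.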


 

Last updated:
Wednesday, 3 August, 2016 2:10 PM

http://web.csse.uwa.edu.au/2777286