
Centre for Intelligent Systems Research

CISR Research Seminar Series - 2013

Seminar categories: CISR presentation; Professional development; Keynote lecture; External presentation; No presentation

Seminars will be held at 12pm in the CISR Breakout Area (except where otherwise indicated)

Date Presenter Presentation/topic
Monday 25th November Ahmad Hossny Resolving Bounded Parametric Uncertainty for Scheduling Heuristic Algorithms 


Uncertainty is a challenge for every decision maker, as it increases both risk and cost. Uncertainty is commonly classified by its nature into parametric, structural, behavioural and other classes, and uncertainty problems can be formulated mathematically, probabilistically, with fuzzy sets, or logically. The proposed research discusses how to minimise the impact of bounded uncertainty in the parameters of scheduling algorithms, especially nonlinear and heuristic algorithms. It explains how to use algorithm slicing, interval arithmetic and interval algebra to minimise the total uncertainty of the objective function even when the input parameters are uncertain. The proposed methodology has been applied to three different scheduling algorithms, namely the Bratley, McNaughton and Hodgson algorithms, and all three produced more certain results. The main factors affecting the marginal enhancement are the distance and number of overlaps.
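To illustrate the basic idea behind propagating bounded uncertainty, here is a minimal interval-arithmetic sketch (illustrative only; the class, parameter values and the two-job example are hypothetical, not taken from the talk). Each uncertain parameter is a [lo, hi] bound, and arithmetic propagates the bounds instead of point values, so the width of the result measures the remaining uncertainty:

```python
class Interval:
    """A closed interval [lo, hi] representing a bounded uncertain value."""

    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi

    def __add__(self, other):
        # Sum of intervals: endpoints add independently.
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __mul__(self, other):
        # Product: take the extremes over all endpoint combinations.
        products = [self.lo * other.lo, self.lo * other.hi,
                    self.hi * other.lo, self.hi * other.hi]
        return Interval(min(products), max(products))

    def width(self):
        # A narrower width means a more certain result.
        return self.hi - self.lo

# Hypothetical uncertain processing times for two jobs in sequence:
t1 = Interval(2.0, 3.0)
t2 = Interval(4.0, 5.0)
makespan = t1 + t2
print(makespan.lo, makespan.hi, makespan.width())  # 6.0 8.0 2.0
```

The goal of the techniques described in the abstract is to keep that final width as small as possible even though the inputs are intervals.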

Monday 18th November Mojdeh Nasir Multi-scale Pedestrian Steering Behaviour Modelling within the Built Environment 


Steering and navigating through the environment is a crucial part of our daily lives as pedestrians, and the most influential factor in this task is the surrounding physical environment. The dominant motivations underpinning this study are to identify an engineering framework with the potential to incorporate environmental influences into pedestrian steering behaviour, and to develop a synthetic approach. This research aims to develop a realistic computer simulation model of pedestrian steering behaviour that includes the effects of spatial layout, in order to predict pedestrian walking paths within built environments under normal, non-panic conditions. A multi-scale approach combining macroscopic (global) and microscopic (local) modelling is proposed.

Monday 11th November Mohammed Hossny Recent Advances in Augmented Reality 


In this talk I will summarise augmented reality papers recently presented at the IEEE International Symposium on Mixed and Augmented Reality (ISMAR'13), held in Adelaide. The talk highlights user studies, performance improvements, novel AR marker designs and applications.

Monday 4th November John McCormick Learning to Dance with a Human 


Artificial Neural Networks (ANNs) are a popular means of allowing systems to learn about and filter aspects of their domain. In this presentation we will discuss the use of ANNs in the context of dance performance. We will also present preliminary findings on combining ANNs with Hidden Markov Models (HMMs) to achieve recognition of relatively complex full-body movement sequences. The network is presented with movement in the form of motion capture streams, both pre-recorded and live. Learning can be viewed as analogous to rehearsal, and recognition and response as analogous to performance. The interrelationship between the artificial neural network and the dancer throughout the process is considered as a potential means of allowing the network to function beyond its limited self-contained capability. The use of human expert knowledge to rapidly enable system capability is commonplace; however, when it involves personal information such as a person's movement signature or identifying biological features, it may be good for the system, but is it good for the human?

Monday 28th October

Mahardhika Pratama External presentation  A Novel Meta-Cognitive-based Scaffolding Learning Machine 


This presentation introduces a novel meta-cognitive learning machine termed the GENERIC-Classifier (gClass). Its learning engine embodies mechanisms of human learning by automatically regulating three important issues: what to learn, how to learn and when to learn. The how-to-learn facet is devised according to scaffolding theory, a well-known tutoring theory for fostering human learning of complex material. The what-to-learn aspect adopts the concept of online active learning by virtue of an extended conflict-and-ignorance paradigm. The when-to-learn component explores the standard sample reverse strategy. The cognitive constituent can be perceived as a generalised version of the Takagi-Sugeno-Kang (TSK) fuzzy system, where the rule premise is underpinned by a multivariate Gaussian function capable of granting a rotation effect to the ellipsoids, while the rule consequent does not rely on the linear hyper-plane of ubiquitous TSK fuzzy rules in the literature; instead, it uses a functional-link approach that exploits the benefits of a non-linear Chebyshev function. More importantly, the holistic learning framework of gClass works in a fully online and local mode, allowing it to be plugged into time-critical applications. Comprehensive empirical studies and the corresponding statistical tests ascertain that gClass can deliver comparable or, in some cases, superior classification rates while maintaining more compact and parsimonious network topologies and requiring fewer training samples than its counterparts.


Mahardhika Pratama was born in Surabaya, Indonesia. He received a B.Eng degree (First Class Honours) in Electrical Engineering from the Sepuluh Nopember Institute of Technology, Indonesia, in 2010, where he was also awarded the best and most favourite final project by the institution. He holds a Master of Science (M.Sc) degree in Computer Control and Automation (CCA) from Nanyang Technological University, Singapore (2011), and received a prestigious engineering achievement award from the Institution of Engineers, Singapore. Mr Pratama was nominated for Marquis Who's Who in the World in 2013. He is currently pursuing a PhD at the University of New South Wales, Australia, where he has been granted a high-impact publication award. Mr Pratama is a member of the IEEE, the IEEE Computational Intelligence Society (CIS), the IEEE Systems, Man and Cybernetics Society (SMCS) and the Indonesian Soft Computing Society (ISC-INA), and is an active reviewer for several top journals, including IEEE Transactions on Systems, Man and Cybernetics, Part B: Cybernetics, Neurocomputing and Applied Soft Computing. His research interests involve machine learning, computational intelligence, evolutionary computation, fuzzy logic, neural networks and evolving adaptive systems.

Monday 21st October Josipa Crnic Professional Development  COS Pivot 


COS Pivot is a powerful tool that provides access to a comprehensive list of global funding and collaboration opportunities.

Monday 7th October Bruce Gunn Development of resource based process models from event logs 


One of the difficulties with the development of most resource-based models is understanding how the process or system operates and capturing this succinctly within a process model. Process Mining is a methodology, developed over the past decade, that utilises event logs from information systems at all process scales and converts the data into process models. The models, based on Petri nets, can then be used to assess the validity and conformity of business processes against the ideal. This approach, while useful and interesting for many IT-based business processes, has a number of shortcomings that make it less attractive for resource-constrained processes. In many applications, such as manufacturing, supply chain and health care, the provision of goods and services is mostly constrained by the number of key resources, both human and machine, that are available. To make Process Mining capable of producing models that are adequate for resource-constrained processes in dynamic, "unstructured" environments, a number of new steps must be added to the methodology. These issues will be highlighted, and a methodology developed that will provide a significantly wider range of application for Process Mining tools.
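The first step of most process-mining algorithms is to derive a "directly-follows" relation from the event log. A minimal sketch of that step (the log, case IDs and activity names here are hypothetical, for illustration only):

```python
from collections import Counter

# Hypothetical event log: (case_id, activity), ordered by timestamp per case.
event_log = [
    ("c1", "register"), ("c1", "triage"), ("c1", "treat"), ("c1", "discharge"),
    ("c2", "register"), ("c2", "treat"), ("c2", "discharge"),
]

def directly_follows(log):
    """Count how often activity b directly follows activity a in a case."""
    traces = {}
    for case, activity in log:
        traces.setdefault(case, []).append(activity)
    pairs = Counter()
    for trace in traces.values():
        for a, b in zip(trace, trace[1:]):
            pairs[(a, b)] += 1
    return pairs

dfg = directly_follows(event_log)
print(dfg[("treat", "discharge")])  # 2: seen in both cases
```

Discovery algorithms then turn these counts into a Petri net; the extensions discussed in the talk would additionally have to attribute each step to the resource that performed it.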

Monday 30th September Chris Rawson Professional Development  Impact factors, journal rankings and the ERA journals lists: what is measured? 


Researchers often use impact factors and journal rankings to assist them to build a profile of their publication output. It is therefore important to ask 'what do the numbers mean?' We outline the methods used by Thomson Reuters and Elsevier to calculate impact factors and journal rankings. We then discuss the introduction and subsequent retraction of the ERA journal lists.

Monday 23rd September Vu Le Complex Simulation of Stockyard Mining Operations 


Conflicts between resources in stockyards cost mining companies millions of dollars a year, so an effective planning strategy is needed to reduce these operational conflicts. In this research, a stockyard simulation model of a mining operation is proposed. The simulation uses discrete-event and continuous strategies to create a highly detailed level of visualisation and animation that closely resembles actual stockyard operation. The proposed simulation model is tightly integrated with a stockpile planner and is used to evaluate the feasibility of a given production plan. The highly detailed visualisation allows planners to determine the source of a conflict, which can then guide its elimination.
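To make the discrete-event idea concrete, here is a toy sketch (not the model from the talk; the single-machine setup and job times are invented for illustration) in which jobs competing for one stockyard machine are processed in time order, so a resource conflict shows up as waiting time:

```python
def simulate(jobs):
    """jobs: list of (arrival_time, duration) for a single shared machine.
    Returns the total time jobs spent waiting for the machine."""
    machine_free_at = 0.0
    total_wait = 0.0
    for arrival, duration in sorted(jobs):   # process arrivals in time order
        start = max(arrival, machine_free_at)
        total_wait += start - arrival        # conflict: job had to wait
        machine_free_at = start + duration
    return total_wait

# Two overlapping jobs conflict; the second must wait 3 time units.
print(simulate([(0.0, 5.0), (2.0, 4.0)]))  # 3.0
```

A production-scale simulator adds many machines, stockpiles and continuous material flows, but the event-ordering principle is the same.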

Monday 16th September Imali Hettiarachchi Multidisciplinary cognitive neuroscience and my current research 


I often get asked what I am doing in my research; the best description of my field is 'cognitive neuroscience'. Cognitive neuroscience is a multidisciplinary field of research, the part of neuroscience that is interested in how cognitive functions are produced by the brain. Cognitive neuroscience research normally involves the application of a behavioural task together with the use of a brain imaging technique.

My current research addresses the problem of how brain areas functionally integrate to perform a certain cognitive function, i.e. the information flow within the brain. I use signal processing techniques including independent component analysis, multivariate autoregressive modelling and Kalman filtering to analyse event-related EEG data and elucidate high-level cognitive functions such as visual categorisation and visual object recognition. In this talk I will present some new analysis methods I have proposed in my thesis for EEG-based information flow analysis, and some future directions of research that make use of the aforementioned signal processing techniques.
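As a toy illustration of the autoregressive-modelling idea behind MVAR analysis (this single-channel AR(1) example is invented, not from the thesis): a coefficient is estimated by least squares from the signal's own past; in the multivariate case the same estimate is computed across channels, and off-diagonal coefficients are what indicate directed information flow.

```python
def fit_ar1(signal):
    """Least-squares estimate of a in the model x[t] = a * x[t-1] + noise."""
    num = sum(signal[t] * signal[t - 1] for t in range(1, len(signal)))
    den = sum(signal[t - 1] ** 2 for t in range(1, len(signal)))
    return num / den

# A noise-free AR(1) series with a = 0.5 recovers its coefficient exactly:
series = [1.0]
for _ in range(20):
    series.append(0.5 * series[-1])
print(fit_ar1(series))  # 0.5
```

Real EEG analysis fits time-varying multivariate coefficients, e.g. with a Kalman filter, but the underlying regression is of this form.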

Monday 9th September Marzieh Asgari Formulation and Simulation of a 3D Mechanical Model of Embryos for Microinjection 


Understanding cell manipulation, for example in microinjection, requires an accurate model of the cells. Motivated by this requirement, a 3D particle-based mechanical model is derived for simulating the deformation of the fish egg membrane and the corresponding cellular forces during microrobotic cell injection. The model is formulated based on the kinematics and dynamics of a spring-damper configuration with multi-particle joints, taking visco-elastic fluidic properties into account. It simulates the indentation force feedback as well as the visual deformation of the cell during microinjection. A preliminary simulation study was conducted with different parameter configurations. The results indicate that the proposed particle-based model provides deformation profiles similar to those observed in a real microinjection experiment on zebrafish embryos published in the literature. As a generic modelling approach is adopted, the proposed model also has potential for other types of manipulation, such as micropipette cell aspiration.
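The basic ingredient of such a model is the spring-damper force between particle pairs. A minimal 1D sketch (the function and all parameter values are hypothetical, for illustration; the actual model is 3D with multi-particle joints):

```python
def spring_damper_force(x1, x2, v1, v2, rest_length, k, c):
    """Force on particle 1 from a spring-damper link to particle 2 (1D).
    k is the spring stiffness, c the damping coefficient."""
    stretch = (x2 - x1) - rest_length   # elongation beyond rest length
    relative_velocity = v2 - v1         # viscous (damper) contribution
    return k * stretch + c * relative_velocity

# A membrane segment stretched 0.2 units beyond rest, closing at 0.1 units/s:
f = spring_damper_force(0.0, 1.2, 0.0, -0.1, rest_length=1.0, k=10.0, c=2.0)
print(f)  # 10*0.2 + 2*(-0.1) = 1.8
```

Summing such forces over a mesh of particles and integrating over time yields the membrane deformation and the indentation force fed back to the operator.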

Monday 2nd September

Matthew Watson Mrs Alving - [Actor/Robot]: unifying dramatic theatre and interactive technologies in the Mixed Reality Performance Laboratory Project ARTLAB 


This presentation will demonstrate and discuss the development of Mrs Alving, the robot designed and built in-house for the Mixed Reality Performance Lab (MRPL) Project: ARTLAB. In this talk, we will hear details of the brief, design and deployment of the robotic components used to create a robotic actor. Perspectives on the project's development and outcomes will also be explored with the project's Creative Director Gorkem Acaroglu, the Director of Deakin's Motion Capture Lab Dr Kim Vincs, and CISR team members.

Monday 2nd September Timos Sellis External presentation  Managing Streaming Spatial Data 


Timos Sellis received his diploma degree in Electrical Engineering in 1982 from the National Technical University of Athens (NTUA), Greece. In 1983 he received the M.Sc. degree from Harvard University and in 1986 the Ph.D. degree from the University of California at Berkeley, both in Computer Science. In 1986, he joined the Department of Computer Science of the University of Maryland, College Park as an Assistant Professor, and became an Associate Professor in 1992. Between 1992 and 1996 he was an Associate Professor at NTUA, where he then served as a Professor until January 2013. He is currently a Professor at the School of Computer Science and Information Technology of RMIT University in Australia. Between 2007 and 2012, Timos was also the Director of a research institute he founded in Greece, the Institute for the Management of Information Systems (IMIS) of the "Athena" Research Center.

His research interests include data streams, peer-to-peer database systems, personalization, the integration of the Web and databases, and spatio-temporal database systems. He has published over 200 articles in refereed journals and international conferences in these areas and has been an invited speaker at major international events. He has also participated in and coordinated several national and European research projects. Prof. Sellis is a recipient of the prestigious Presidential Young Investigator (PYI) award, given by the President of the USA to the most talented new researchers (1990), and of the VLDB 10-Year Paper Award in 1997 (awarded to the paper published in the proceedings of the VLDB 1987 conference that had the biggest impact on the field of database systems in the decade 1987-97) for his work on spatial databases. He was the president of the National Council for Research and Technology of Greece (2001-2003), and in November 2009 he was awarded the status of IEEE Fellow for his contributions to database query optimization and spatial data management.


Many applications nowadays require data that "flow" continuously (data streams). Examples include fleet management systems, temperature and other measurement monitoring, and even stock price monitoring. Of particular interest, due to the large increase in location-detecting devices (e.g. GPS), are applications that manage large volumes of data streams carrying geospatial information. This talk addresses the management of a large number of objects in such modern monitoring applications. In this environment, the presence of ephemeral streams and dynamically changing data drastically changes the way so-called "continuous queries" are processed in order to provide answers about the position and trajectory of objects.
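The flavour of a continuous spatial query can be sketched as follows (a deliberately simplified, hypothetical example, not the systems discussed in the talk): each arriving stream tuple updates an index of last-known positions, so "which objects are inside region R?" can be answered at any instant rather than once.

```python
def inside(region, point):
    """Axis-aligned rectangle containment test."""
    (xmin, ymin, xmax, ymax), (x, y) = region, point
    return xmin <= x <= xmax and ymin <= y <= ymax

latest = {}   # object id -> last reported position

def on_update(obj_id, position, region):
    """Process one stream tuple; return objects currently inside the region."""
    latest[obj_id] = position
    return sorted(o for o, p in latest.items() if inside(region, p))

region = (0.0, 0.0, 10.0, 10.0)
on_update("truck1", (3.0, 4.0), region)
print(on_update("truck2", (12.0, 1.0), region))  # ['truck1']
```

Real stream processors must additionally cope with high update rates, windowing and many simultaneous standing queries, which is where the research challenges lie.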

Monday 26th August Fuleah Abdul Razzaq Locally Sparsified Compressive Sensing for Improved MR Image Quality 


The fact that medical images contain redundant information is exploited by researchers for faster image acquisition: the number of measurements is reduced in order to achieve rapid imaging. However, due to this inadequate sampling, noise artefacts are inevitable in Compressive Sensing (CS) MRI. CS utilises the transform sparsity of MR images to regenerate images from under-sampled data. Locally sparsified compressed sensing is an extension of simple CS: it localises sparsity constraints to sub-regions rather than using a single global constraint. I will present a framework that uses local CS to improve image quality without increasing the sampling rate or slowing the acquisition process. This is achieved by exploiting local constraints: dividing the image into independent sub-regions allows different sampling rates within the image. The energy distribution of MR images is not even, and most of the noise arises from under-sampling in high-energy regions. By sampling sub-regions according to the energy distribution, noise artefacts can be minimised. I will show experimental results and their comparison with global CS.
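The budget-allocation idea can be sketched very simply (a hypothetical illustration, not the actual framework): split the image into sub-regions and give high-energy regions a larger share of a fixed measurement budget, instead of one global sampling rate.

```python
def allocate_samples(region_energies, total_samples):
    """Distribute a fixed measurement budget proportionally to region energy."""
    total_energy = sum(region_energies)
    return [round(total_samples * e / total_energy) for e in region_energies]

# Four sub-regions; most energy is concentrated in the first two, so they
# receive most of the measurements and suffer less under-sampling noise.
energies = [40.0, 30.0, 20.0, 10.0]
print(allocate_samples(energies, 1000))  # [400, 300, 200, 100]
```

Each sub-region is then reconstructed with its own sparsity constraint at its own rate, which is what distinguishes local CS from the global formulation.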

Monday 19th August Prof. Saeid Nahavandi CISR Research Trends 
Tuesday 13th August

Prof. Duncan McFarlane External presentation  Product Intelligence: Theory and Practice 


Duncan McFarlane is Professor of Industrial Information Engineering at the Cambridge University Engineering Department and head of the Distributed Information & Automation Laboratory within the Institute for Manufacturing. He has been involved in the design and operation of industrial automation and information systems for twenty years. His research work is focused on distributed industrial automation, reconfigurable systems, RFID integration, track-and-trace systems and valuing industrial information. Most recently he has been examining the role of automation and information solutions in supporting services and infrastructure and in addressing environmental concerns. Between 2000 and 2003, Prof McFarlane was the European Research Director of the Auto-ID Center, and between 2003 and 2006 head of the Cambridge Auto-ID Lab, where he co-founded a series of programmes on information in the aerospace sector, including the Aero-ID Programme examining the role of RFID in the aerospace industry. Professor McFarlane is also co-founder and Chairman of RedBite Solutions Ltd, an industrial RFID and track-and-trace solutions company. Between 2006 and 2011 he held the Professorship of Service and Support Engineering, supported by the Royal Academy of Engineering and BAE Systems, and in October 2010 he was appointed Professor of Industrial Information Engineering. He is currently a co-investigator on the Cambridge Centre for Smart Infrastructure and Construction.


Are there any benefits in allowing orders and products to be able to manage their own progress through a supply chain? The notion of associating (and even embedding) information management and reasoning capabilities with a physical product has been discussed for over ten years now. This talk will review the notions of product intelligence and examine the rationales for these models and the practicality of their implementation. A number of trial deployments in manufacturing, logistics and aerospace will be examined and the practical pros and cons of the intelligent product model assessed.

Monday 5th August Roman Spanek External presentation  Semantic Web, Ontologies, Sensor Networks, Security, Trust, Dynamic environments... a brief introduction 


R. Spanek obtained his Master of Science in Automatic Control and Engineering Informatics in 2003 and his PhD in Security in Distributed Environments in 2008, both at the Technical University of Liberec, Czech Republic. He joined the Institute of Computer Science of the Academy of Sciences of the Czech Republic (v.v.i.) in 2004 as a research fellow, and was accepted as academic staff in the Department of Software Engineering of the Technical University of Liberec one year later. Roman Spanek's main research interests cover maintaining security in dynamic and distributed systems, trust management systems and their applications. He has published over 25 research articles, mostly at international conferences (including Springer, IEEE and ACM venues).


Research in computer science has, among other drivers, been shaped by the concrete needs of its users. Because those needs come from many different areas, computer science has had to address a wide variety of topics, drawing on ideas and techniques from many research fields.

From that vast number of topics, I would like to introduce several that I am personally interested in: starting with the Semantic Web paradigm, which provides a way to describe data to the computer so that the computer is able to process it automatically, followed by ontologies, which allow knowledge to be represented in a computer-understandable way.

As a practical example of such a computer system, I would like to introduce a project responsible for the automated measurement and transmission of data from a geographically remote laboratory in the Jizera Mountains.

The last part of my presentation will focus on the main research topics of my group: security in dynamic distributed environments, where traditional techniques well known from the client-server architecture do not work well. We will speak about maintaining security by building and managing trust between entities, and about providing means to establish an initial level of trust between entities. Since the world is not perfect, we shall also mention known attacks on such systems and how some of them can be prevented.

Monday 29th July Ti-Chung Lee External presentation  Stability and its applications: Some basic concepts and recent developments 


T. C. Lee received the M.S. degree in mathematics and the Ph.D. degree in electrical engineering from the National Tsing Hua University, Hsinchu, Taiwan, in 1990 and 1995, respectively. In August 1997, he joined Minghsin University of Science and Technology at Hsinchu as an Assistant Professor of Electrical Engineering, and since 2005, he has been a Professor. His main research interests are stability theory, tracking control of nonholonomic systems, and robot control. He has published over 30 research articles including regular papers in IEEE Trans. on Automatic Control.


Stability is a basic requirement for control engineering systems. Based on the concept of an error model, stability plays an increasingly important role in applications of control systems, including robotic systems, general mechanical systems, power electronic systems, servo motor control and communication systems. This talk gives a brief introduction to recent developments in stability theory and its applications, and proposes some possible research directions.

Monday 22nd July Kianoush Emami External presentation  A Functional Observer Based Fault Detection Technique for Dynamical Systems 


Kianoush Emami obtained his Bachelor of Science in Electrical-Control Engineering and Master of Science in Electrical Engineering from the Ferdowsi University of Mashhad, Iran, in 1999 and 2002 respectively. From 2002 to 2005 he was the Control and Instrument Maintenance Supervisor for NISOC (National Oil Company), and during 2005-2006 he was an Electrical Engineer at Tous Stadt Consulting Engineers Company. For part of 2006 he was also an Electrical Engineer at Khorasan Power Engineering Consulting Company (Moniran). From 2006 to 2010 he served as a lecturer in the Department of Electrical and Computer Engineering at Imam Reza International University, Mashhad, Iran. He is currently pursuing his PhD at the University of Western Australia, supported by an Australian postgraduate research scholarship. He has also undertaken tutorial and lab demonstration duties for undergraduate students in the School of Electrical, Electronic and Computer Engineering at the University of Western Australia.


Fault detection is an important area of study because many processes, if not all, are subject to faults at some point in their lifetime. Some of these faults may be catastrophic: for instance, according to the US Office of the Secretary of Defense, about 80 per cent of flight incidents involving unmanned aerial vehicles are due to faults occurring in the actuators or sensors, or to changes in the inner parameters of the system dynamics. Fault detection is important not only in aerospace but also in many other applications, such as automobiles, trains, chemical and process systems and power generation, to name a few. In such applications, especially when safety is paramount, detecting faults in a timely manner and then taking corrective action to mitigate the fault is the key to avoiding unwanted consequences. Taking corrective action once a fault is detected is a separate area of study in itself, often referred to in the literature as Fault Tolerant Control (FTC). The focus of this talk is not on FTC but on the detection of faults, in particular faults occurring in actuators and faults due to changes in the inner parameters of the system dynamics.

In this talk a functional observer based fault detection method will be presented. Fault detection is achieved using a functional observer based fault indicator that asymptotically converges to a fault indicator derived from the nominal system. The asymptotic value of the proposed fault indicator is independent of the functional observer parameters, and its convergence rate can be altered by choosing appropriate observer parameters. The advantage of this new method is that the observed system does not need to be observable; the proposed fault detection technique is therefore also applicable to systems for which state observers cannot be designed. Moreover, the functional observer fault detection scheme is always of reduced order compared to a state observer based scheme.
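To illustrate the general residual idea behind observer-based fault detection (a deliberately simplified, hypothetical scalar example with a full state observer, not the functional observer scheme of the talk, and with invented system parameters): an observer tracks the nominal plant, and when an actuator fault injects an offset, the residual between the measurement and the observer's prediction exceeds a threshold.

```python
def detect_fault(a=0.9, b=1.0, gain=0.5, fault_at=50, steps=100, threshold=0.5):
    """Scalar plant x[k+1] = a*x + b*(u + fault); returns the first step at
    which the observer residual exceeds the threshold, or None."""
    x, x_hat = 0.0, 0.0
    for k in range(steps):
        u = 1.0
        fault = 2.0 if k >= fault_at else 0.0       # actuator offset fault
        x = a * x + b * (u + fault)                 # true (faulty) plant
        y = x                                       # full-state measurement
        residual = y - (a * x_hat + b * u)          # measurement vs prediction
        x_hat = a * x_hat + b * u + gain * residual # observer correction
        if abs(residual) > threshold:
            return k                                # first detection step
    return None

print(detect_fault())  # 50: detected at the step the fault appears
```

The functional observer scheme presented in the talk achieves the same residual-based detection while estimating only a function of the state, which is what allows it to work for unobservable systems and with reduced order.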

Monday 15th July Khashayar Khoshmanesh Professional Development  Paper Writing Workshop 
Monday 8th July Kyle Nelson Novel View Synthesis through Robust Inverse Tensor Transfer 


Novel view synthesis refers to the task of generating a new image of a 3-dimensional scene from a novel viewpoint, utilising a small set of real input views. Image-based rendering techniques rely on multi-view geometry to generate novel views by directly transferring pixel information from input images, bypassing the need to reconstruct and reproject the 3D scene as is the case in traditional model-based view synthesis methods. This presentation will discuss an image-based robust inverse tensor transfer technique for novel view synthesis. The proposed method utilises the trifocal tensor encapsulating the geometry relating three images to transfer each point in a novel view to the original input frames and retrieve pixel information. Rather than relying on a pre-computed depth map between images for view synthesis, inverse tensor transfer proposes sets of geometrically compliant points in the input views and selects the most likely correspondence based on consistency. The proposed robust inverse tensor transfer algorithm introduces a multi-stage approach which most notably includes a robust variance measure derived from DAISY descriptors, a depth-guided second stage and a compulsory third stage employing dynamic programming to find the optimal solution along each epipolar line in the novel view. The result is a technique which produces realistic and geometrically consistent novel views from only a few uncalibrated input images and achieves significant robustness and quality improvements with respect to existing methods.

Monday 1st July Chris Rawson Professional Development  Scholarly resources for researchers in computer and robotic engineering 


The Deakin University Library plays a key role in providing the scholarly literature necessary to assist researchers to keep comprehensively up to date in their area of research. The library has developed its collection to include a large volume of high quality, relevant and scholarly resources. It is important for researchers to be aware of the full breadth of material that is available to them.

This seminar presentation will identify a number of resources provided by the library that are likely to be relevant to computer scientists and robotic engineers, and discuss some of the means by which they can be systematically and comprehensively searched.

Monday 24th June Hussein Haggag Tracking Dynamically Changing Skeleton 


The fields of biomechanics and motion tracking and analysis are closely associated with areas such as medicine, sports and ergonomics. There are currently several methods and tools for tracking the human body and analysing its movements in three-dimensional space. When the effectiveness of these methods and tools was analysed, it was found that some were expensive, others were marker-based, and others lacked the portability needed for situations such as limited-space environments. Recently developed depth sensors, such as Microsoft's Kinect and the Asus Xtion, have attracted significant attention (hundreds of scientific publications) to the question of how to take advantage of this technology to achieve accurate motion tracking and action detection with marker-less approaches. The functionality and 3D motion capture of these sensors make such tasks easier: with an affordable price, accuracy comparable to that of a gold-standard camera, smaller size and improved portability, depth sensors have become a new tool for tracking the three-dimensional motion of the human body.

Marker-less motion tracking depth sensors have many potential applications. Work safety in Small and Medium Enterprises (SMEs) is one of them: many assembly operations involve repetitive motions, uncomfortable postures and other ergonomic hazards, and ergonomic assessments contribute to increasing the productivity and performance of organisations by reducing the rate of work injuries and working towards preventing them. The depth sensor is used here for its portability, relatively low cost compared to other motion tracking sensors, and rapid automatic calibration. Another application is the interactive living room, where skeleton tracking technologies are used to revolutionise the living room experience: depth sensors increase the ergonomic awareness of the living room audience using audio, visual and tactile feedback, and are also used with smart TVs to skip inappropriate content when a young audience is present.

One of the current problems in using marker-less depth sensors such as the Microsoft Kinect or Asus Xtion is their inability to track dynamically changing skeletons, where the tracked skeleton has an expandable number of joints that depends on the objects captured by the depth sensor.

Monday 17th June Lei Wei Haptically-enabled needle thoracostomy training simulator with multi-layered deformable models 


Pneumothorax is a medical condition in which excess air exists in the pleural cavity and exerts pressure on the lungs. If not treated properly and in time, a spontaneous pneumothorax will turn into a tension pneumothorax, an emergency condition that may lead to respiratory failure and even death. Tension pneumothorax requires an immediate operation, needle thoracostomy. Needle thoracostomy is fast and straightforward, yet there are still variations, risks and emergency situations. Existing needle thoracostomy training is generally conducted on anaesthetised animals, cadavers and mannequins, which are expensive, raise hygienic and ethical issues, and cannot provide enough realism for the procedure. Other medical training simulation systems rely only on visual rendering and provide even less immersion. In this presentation, we introduce a dedicated needle thoracostomy training simulator that aims to provide an immersive virtual environment through the integration of haptic interaction and multi-layered deformable models. Two variations of the operation have been designed and implemented in close collaboration with medical doctors to ensure accuracy. Adjustable parameters have been incorporated to simulate a variety of patients, and several algorithms and technical issues are also described and discussed.

Monday 10th June Sahar Araghi Intelligent Traffic Signal Timing Control Using Machine Learning Methods 


Traffic congestion is one of the major problems in modern cities. This presentation begins with a history of traffic signal lights and their control methods. Then, the problem of setting appropriate green times for traffic lights with the aim of minimising congestion will be discussed. Q-learning and neural networks are the two machine learning methods applied for controlling traffic lights and minimising the total delay as the objective function. It is assumed that an intersection behaves like an intelligent agent learning how to set green times in each cycle based on the traffic parameters. Comparative results for Q-learning and the neural network are presented for an isolated intersection.
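
As a rough illustration of the Q-learning half of this approach, the sketch below treats an isolated intersection as an agent choosing a green time to minimise delay. The state discretisation, candidate green times, learning parameters, and the toy delay model are all illustrative assumptions, not the settings used in the presented work.

```python
import random

random.seed(0)

# Toy Q-learning agent choosing a green time (seconds) for one phase of an
# isolated intersection. States, actions, parameters, and the delay model
# are hypothetical placeholders.
GREEN_TIMES = [10, 20, 30, 40]            # candidate actions (s)
STATES = list(range(5))                   # discretised queue-length levels
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.2     # learning rate, discount, exploration

Q = {(s, a): 0.0 for s in STATES for a in GREEN_TIMES}

def delay(state, green):
    """Hypothetical delay model: longer queues need longer greens."""
    return abs(state * 10 - green) + random.uniform(0, 2)

def choose(state):
    """Epsilon-greedy action selection."""
    if random.random() < EPSILON:
        return random.choice(GREEN_TIMES)
    return max(GREEN_TIMES, key=lambda a: Q[(state, a)])

state = 2
for _ in range(2000):
    action = choose(state)
    reward = -delay(state, action)        # minimising delay = maximising reward
    next_state = random.choice(STATES)    # toy traffic dynamics
    best_next = max(Q[(next_state, a)] for a in GREEN_TIMES)
    Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])
    state = next_state

# After training, the greedy green time for a medium queue (state 2)
best_green = max(GREEN_TIMES, key=lambda a: Q[(2, a)])
```

Under this toy delay model, the agent learns that a medium queue is best served by a medium green time.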

Monday 3rd June Jin Wang Probabilistic Topic Models: Extensions and Applications 


Due to the rapid development of information technologies, a large amount of digital information, including books, scientific articles, news, blogs, webpages, images, sound, videos, and social networks, is produced every day. Effectively and efficiently organizing, searching, and understanding this vast amount of digital information is a challenging task. Computational tools that can help humans organize, search, and understand this digital information are very valuable. Probabilistic Topic Models (PTMs) were developed to discover and annotate large archives of documents with thematic information. A PTM is able to discover the underlying topics in a collection of documents, which makes organizing, searching, and understanding much easier. Recently, PTMs have been extended into many other domains such as computer vision, biomedical time series analysis, and bioinformatics. In this presentation, the principle of Probabilistic Topic Models will be introduced. Then, some extensions based on the classical topic model, i.e., Latent Dirichlet Allocation (LDA), will be explained. Finally, some examples of PTM applications in computer vision and biomedical time series analysis will be illustrated.
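
To make the idea concrete, here is a minimal collapsed Gibbs sampler for LDA on a toy corpus. The two-topic corpus, hyperparameters, and iteration count are illustrative assumptions chosen so the sketch runs quickly; real topic modelling would use a library implementation on a large archive.

```python
import random
from collections import defaultdict

random.seed(1)

# Toy corpus with two obvious themes (made-up illustration data)
docs = [
    "stock market trade price".split(),
    "market price stock fund".split(),
    "gene dna protein cell".split(),
    "cell protein gene genome".split(),
]
K, ALPHA, BETA, ITERS = 2, 0.1, 0.01, 200  # topics, priors, Gibbs sweeps

vocab = sorted({w for d in docs for w in d})
V = len(vocab)
ndk = [[0] * K for _ in docs]               # document-topic counts
nkw = [defaultdict(int) for _ in range(K)]  # topic-word counts
nk = [0] * K                                # topic totals
z = []                                      # topic assignment per token

# Random initialisation
for d, doc in enumerate(docs):
    zs = []
    for w in doc:
        t = random.randrange(K)
        zs.append(t)
        ndk[d][t] += 1; nkw[t][w] += 1; nk[t] += 1
    z.append(zs)

# Collapsed Gibbs sampling: resample each token's topic given all the others
for _ in range(ITERS):
    for d, doc in enumerate(docs):
        for i, w in enumerate(doc):
            t = z[d][i]
            ndk[d][t] -= 1; nkw[t][w] -= 1; nk[t] -= 1
            weights = [(ndk[d][k] + ALPHA) * (nkw[k][w] + BETA) / (nk[k] + V * BETA)
                       for k in range(K)]
            t = random.choices(range(K), weights)[0]
            z[d][i] = t
            ndk[d][t] += 1; nkw[t][w] += 1; nk[t] += 1

# Dominant topic per document; the finance and biology documents should separate
dominant = [max(range(K), key=lambda k: ndk[d][k]) for d in range(len(docs))]
```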

Monday 27th May

Prof. Toshio Fukuda External presentation  Micro-surgery and simulator based medicine 


Toshio Fukuda received the B.A. degree from Waseda University, Tokyo, Japan, in 1971, and the M.S. and Dr. Eng. degrees from the University of Tokyo, Tokyo, Japan, in 1973 and 1977, respectively.

In 1977, he joined the National Mechanical Engineering Laboratory. In 1982, he joined the Science University of Tokyo, Japan, and then joined Nagoya University, Nagoya, Japan, in 1989. He was Director of the Center for Micro-Nano Mechatronics and Professor in the Department of Micro-Nano Systems Engineering at Nagoya University, where he was mainly involved in the research fields of intelligent robotic and mechatronic systems, cellular robotic systems, and micro- and nano-robotic systems. He was the Russell Springer Chaired Professor at UC Berkeley, Distinguished Professor at Seoul National University, and held positions at many other universities. Currently, he is Professor Emeritus at Nagoya University, Visiting Professor at the Institute for Advanced Research, Nagoya University, Professor at Meijo University, and Professor at the Beijing Institute of Technology.

Dr. Fukuda is IEEE Region 10 Director (2013-2014) and served as President of the IEEE Robotics and Automation Society (1998-1999), Director of IEEE Division X, Systems and Control (2001-2002), and Editor-in-Chief of the IEEE/ASME Transactions on Mechatronics (2000-2002). He was President of the IEEE Nanotechnology Council (2002-2003, 2005) and President of SOFT (Japan Society for Fuzzy Theory and Intelligent Informatics) (2003-2005). He was elected a member of the Science Council of Japan (2008-). He received the IEEE Eugene Mittelmann Award (1997), IEEE Millennium Medal (2000), Humboldt Research Prize (2002), IEEE Robotics and Automation Pioneer Award (2004), IEEE Robotics and Automation Society Distinguished Service Award (2005), an award from the Ministry of Education and Science in Japan (2005), the IEEE Nanotechnology Council Distinguished Service Award (2007), the Best Googol Application Paper Award from IEEE Transactions on Automation Science and Engineering (2007), best paper awards from RSJ (2004) and SICE (2007), the Special Funai Award from JSME (2008), the 2009 George Saridis Leadership Award in Robotics and Automation, the IEEE Robotics and Automation Technical Field Award (2010), the ROBOMECH Award 2010, The Society of Instrument and Control Engineers Technical Field Award (2010), the IROS Harashima Award for Innovative Technologies (2011), the Friendship Award of Liaoning Province, PR China (2012), and the Distinguished Service Award of The Robotics Society of Japan (2010). World Automation Congress 2010 (WAC 2010) was dedicated to Prof. Toshio Fukuda, and he received the Best Paper Award at the 2010 International Symposium on Micro-Nano Mechatronics and Human Science (MHS2010). He is an IEEE Fellow (1995), SICE Fellow (1995), JSME Fellow (2001), RSJ Fellow (2004), and Honorary Doctor of Aalto University School of Science and Technology (2010).


Many micro robotic surgery systems have been developed to date, of which the da Vinci Surgical System is the most commercially successful. Others exist but have had difficulties both in the market and in obtaining FDA approval.

We have been developing an endovascular micro surgery system and also an evaluation simulator for assessing the surgical performance of human doctors and/or robotic systems. This simulator is fabricated using microtechnology from patient CT data of the brain and other organs. It has proved very efficient and useful for evaluating the skill of medical doctors, and important for developing different catheter devices as well as stents and flow diverters. It can also be used in medical applications to scientifically clarify the aneurysm development process.

Monday 20th May Mats Isaksson An introduction to robot force control 


This presentation provides a short introduction to robot force control and presents several case studies where such functionality is useful. As CISR has recently acquired an IRB 120 robot featuring the force control option, a short programming example demonstrating the ease of use of this functionality is provided. Although robot force control is not new, the number of industrial installations is limited, and CISR has an opportunity to help introduce this functionality to local industry. The main objective of this presentation is to brainstorm ideas for utilising robot force control, both to support industry partners and in pure research projects.
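
As a hint of what such functionality looks like in code, the sketch below shows a one-dimensional admittance-style force control loop: a measured contact-force error is converted into a velocity command. The gains, the elastic contact model, and the 1-D setting are illustrative assumptions, not the ABB force control interface itself.

```python
# One-dimensional admittance-style force control loop: a contact-force error
# is converted into a velocity command that is integrated into position.
# All values (gains, stiffness, setpoint) are illustrative assumptions.
DT = 0.01            # control period (s)
K_ENV = 1000.0       # assumed environment stiffness (N/m)
F_REF = 5.0          # desired contact force (N)
ADMITTANCE = 0.002   # velocity command per newton of force error ((m/s)/N)

x = 0.0              # tool position; contact surface is at x = 0
for _ in range(1000):
    f_measured = max(0.0, K_ENV * x)   # simple elastic contact model
    f_error = F_REF - f_measured
    x += ADMITTANCE * f_error * DT     # integrate commanded velocity

# The loop settles where K_ENV * x ~= F_REF, i.e. x ~= 5 mm
```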

Monday 13th May Hailing Zhou Towards object based image editing 


With the increasing use of images in web design, document processing, entertainment, medical analysis, virtual environment creation, etc., demand is growing dramatically for effective editing techniques that can quickly and accurately create, compose, render and manipulate image contents. In recent years, plenty of research has been conducted on these tasks. However, current techniques are still far from satisfactory: producing a desired result usually requires extensive user guidance, with painstaking time and effort. Our research thus investigates new techniques and tools for effective creation, extraction, composition, and other manipulations of image contents. We introduce an object-oriented, vector-based image representation. With this representation, we perceive an image as a set of objects represented by vector graphics, so that image editing can be performed easily and semantically.

Monday 6th May Chintha Handapangoda Generalized coupled photon transport model for correlated photon streams with distinct frequencies 


The study of light propagation through a turbid medium involving the coupling of multiple frequencies has applications in many different disciplines, such as fluorescence spectroscopy, phosphorescence imaging and laser Doppler flowmetry. Low-power laser radiation induces tissue fluorescence without tissue damage and is thus considered a very versatile tool in diagnostic fluorescence spectroscopy, with many applications in medicine. Phosphorescence lifetime imaging has become a widely used technique for tomographic oxygen imaging. Laser Doppler flowmetry is routinely used to measure blood flow rate.

Conventional models used in fluorescence spectroscopy and phosphorescence imaging are based on the diffusion approximation of photon transport theory. Existing formalisms phenomenologically arrive at the diffusion equations without systematically considering the simultaneous conservation of energy in both the excitation and inelastically scattered beams. We proposed a generalized coupled photon transport model that can handle correlated photon streams with distinct frequencies. The diffusion models for fluorescence spectroscopy and phosphorescence imaging derived using these more accurate photon transport models contain an additional significant diffusion term that had been ignored in the conventional models.
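
For context, conventional diffusion-based fluorescence models couple an excitation fluence equation to an emission fluence equation of roughly the following form (the notation here is generic, not the presenter's):

```latex
\nabla \cdot \left[ D_x(\mathbf{r}) \nabla \Phi_x(\mathbf{r}) \right]
  - \mu_{ax}(\mathbf{r})\, \Phi_x(\mathbf{r}) = -S_x(\mathbf{r})
\qquad \text{(excitation)}

\nabla \cdot \left[ D_m(\mathbf{r}) \nabla \Phi_m(\mathbf{r}) \right]
  - \mu_{am}(\mathbf{r})\, \Phi_m(\mathbf{r})
  = -\eta \, \mu_{af}(\mathbf{r}) \, \Phi_x(\mathbf{r})
\qquad \text{(emission)}
```

where $\Phi_x$ and $\Phi_m$ are the excitation and emission fluences, $D$ and $\mu_a$ are diffusion and absorption coefficients, $S_x$ is the excitation source, $\eta$ is the fluorescence quantum yield, and $\mu_{af}$ is absorption by the fluorophore. The presented work derives an additional diffusion term that this conventional pair ignores.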

Monday 29th April Abbas Khosravi Wind Power Forecasting - Challenges and Recent Advances 


Wind power generation has experienced rapid growth in recent years, in particular in Europe and the Asia-Pacific. In fact, it is the world's fastest-growing source of renewable energy. Wind farms provide clean and emission-free energy and can therefore greatly contribute to offsetting the effects of climate change. Integration and interconnection of wind farms into grids require the availability of accurate short-term forecasts. However, it is well known that wind power forecast errors always exist and cannot be eliminated, even using the best forecasting tools.

This presentation provides a short summary of wind power generation in Australia. This is followed by a discussion of industry's need for accurate wind power forecasting. State-of-the-art methods for wind power forecasting are then briefly reviewed. Finally, the application of prediction intervals for quantifying the uncertainties associated with forecasts is demonstrated.
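
As a small illustration of the prediction interval idea, the sketch below builds a normal-approximation interval around a point forecast from a sample of past forecast errors. The error values and the point forecast are made-up illustration data; real wind power intervals are typically constructed with more sophisticated, often asymmetric, methods.

```python
import statistics

# Normal-approximation prediction interval from historical forecast errors.
# The error sample and the point forecast are made-up illustration values.
errors = [-12.0, 5.0, 8.0, -3.0, 10.0, -7.0, 2.0, -4.0, 6.0, -1.0]  # MW

mu = statistics.mean(errors)       # error bias
sigma = statistics.stdev(errors)   # error spread
z = 1.96                           # ~95% coverage under a normality assumption

forecast = 120.0                   # point forecast (MW), assumed
lower = forecast + mu - z * sigma
upper = forecast + mu + z * sigma
```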

Monday 22nd April Sara Keretna Medical Text Mining - Extraction Methodologies and Techniques for Drug Name Extraction 


Text mining is a challenging field of research due to the unstructured nature of the data being processed. Increased usage of Electronic Health Records (EHRs) in hospitals is making it possible to explore text mining in the medical domain. This talk focuses on drug name extraction, an activity in medical text mining that attempts to extract drug names from raw text. Drug name extraction is a crucial task that is essential for building a complete knowledge base for patients. It is frequently achieved by lexicon-based techniques combined with heuristics. However, these techniques face the difficulty of maintaining an up-to-date and complete lexicon. Methodologies to detect drug names from unstructured medical text that overcome the limitations of the existing techniques are discussed.
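
A minimal illustration of the lexicon-plus-heuristics baseline the talk builds on might look like the following; the tiny lexicon and the suffix heuristic are illustrative assumptions, standing in for a maintained drug dictionary and richer heuristics.

```python
import re

# Lexicon-plus-heuristic drug name extraction. The lexicon and suffix list
# are illustrative assumptions, not a real drug dictionary.
LEXICON = {"aspirin", "ibuprofen", "metformin"}
SUFFIXES = ("cillin", "azole", "statin")   # common drug-name endings

def extract_drugs(text):
    """Return lowercased tokens that match the lexicon or a suffix rule."""
    found = []
    for token in re.findall(r"[A-Za-z]+", text):
        word = token.lower()
        if word in LEXICON or word.endswith(SUFFIXES):
            found.append(word)
    return found

note = "Patient given Aspirin 100mg and amoxicillin; stopped simvastatin."
drugs = extract_drugs(note)   # ['aspirin', 'amoxicillin', 'simvastatin']
```

The suffix rule catches drugs missing from the lexicon ("amoxicillin", "simvastatin"), which is exactly the gap that makes maintaining a complete lexicon difficult.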

Monday 15th April Mohammed Hossny 3D Facial and Multimodal Biometric Technologies 


Biometrics is a key technology that can help distinguish friend from foe and deny an enemy the anonymity needed to hide and strike at will. Biometrics refers to measurable biological (anatomical and physiological) and behavioural characteristics that can be used for automated recognition. This research demonstrates facial detection and recognition capabilities using a Kinect-based system. Extraction of facial features and provision of feature-based fusion through 3D point clouds is evaluated for target identification.

DSTO has undertaken a work program for moving target tracking and for enhancing the quality of images taken from distances of over 1 km. The future objective is to integrate these capabilities with 3D biometric recognition algorithms to deliver a product that has the capability of classifying, identifying and tracking the target of interest unobtrusively in real time.

The preliminary results show a promising trend for low cost solutions that can be populated in crowded facilities such as malls and airports.

Monday 8th April Prof. Saeid Nahavandi Professional Development  Research Paper Writing Skills 
Monday 25th March Thanh Thi Nguyen Fuzzy Portfolio Allocation Models through a New Risk Measure and Fuzzy Sharpe Ratio 


Stock returns are modelled by fuzzy random variables and their covariances are calculated using two approaches. The first approach deploys the strongest t-norm (TM) fuzzy arithmetic, and the covariances of fuzzy random variables are measured by crisp numbers. In the second approach, we employ the weakest t-norm (TW) fuzzy arithmetic, and covariances of fuzzy random variables are computed as fuzzy numbers. Along with the fuzzy modelling of stock returns, portfolio returns are thus represented by fuzzy numbers. A new portfolio risk measure, the uncertainty of the portfolio fuzzy return, is introduced in this paper. Beyond the well-known Sharpe ratio (the "reward-to-variability" ratio) in modern portfolio theory, we initiate the so-called "fuzzy Sharpe ratio" in the fuzzy modelling context. In addition to the introduction of the new risk measure, we also put forward the "reward-to-uncertainty" ratio to assess portfolio performance in fuzzy modelling. Corresponding to the two approaches based on TM and TW fuzzy arithmetic, two portfolio optimization models are formulated in which the uncertainty of portfolio fuzzy returns is minimized whilst the fuzzy Sharpe ratio is maximized. These models are solved by a fuzzy approach or by a genetic algorithm (GA). Solutions of the two proposed models are shown to be dominant in terms of portfolio return uncertainty compared to those of the conventional mean-variance optimization (MVO) model used prevalently in the financial literature. In terms of portfolio performance evaluated by the fuzzy Sharpe ratio and the "reward-to-uncertainty" ratio, the model using TW fuzzy arithmetic results in higher-performance portfolios than those obtained by both the MVO model and the fuzzy model employing TM fuzzy arithmetic. We also found that using the fuzzy approach to solve multi-objective problems appears to achieve better solutions than using a GA, although the GA can offer a series of well-diversified portfolio solutions diagrammed in a Pareto frontier.
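
As a small illustration of the fuzzy-return arithmetic involved, the sketch below combines two triangular fuzzy asset returns into a portfolio return using componentwise operations (which is how sums and non-negative scalar multiples behave under the strongest t-norm TM), and computes a simple reward-to-uncertainty style ratio using the triangle's spread as the uncertainty. The numbers and the exact ratio definition are illustrative assumptions, not the paper's models.

```python
# Triangular fuzzy numbers (low, peak, high). Under the strongest t-norm TM,
# sums and non-negative scalar multiples act componentwise. The asset
# returns, weights, and the spread-based ratio are illustrative assumptions.

def f_add(a, b):
    """Add two triangular fuzzy numbers componentwise."""
    return tuple(x + y for x, y in zip(a, b))

def f_scale(w, a):
    """Scale a triangular fuzzy number by a non-negative weight."""
    return tuple(w * x for x in a)

r1 = (0.02, 0.05, 0.08)   # asset 1 fuzzy return
r2 = (0.01, 0.06, 0.12)   # asset 2 fuzzy return
weights = (0.6, 0.4)

portfolio = f_add(f_scale(weights[0], r1), f_scale(weights[1], r2))
reward = portfolio[1]                      # modal (most plausible) return
uncertainty = portfolio[2] - portfolio[0]  # spread of the triangle
ratio = reward / uncertainty               # reward-to-uncertainty style ratio
```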

Monday 18th March Kevin Netto External presentation  Hard work never killed anyone, it just hurt them...badly 


Dr Netto leads a stream of research investigating the biomechanics of hard work. He is interested in musculoskeletal load when stressors such as heat, shock, vibration or movement restriction are experienced during demanding occupational pursuits. He investigates the effect military body armour has on movement. He collaborates with the Bushfire Cooperative Research Centre (CRC) investigating the occupational health and safety of volunteer bushfire fighters in Australia. He works with the Australian Defence Force investigating vibration and shock loads soldiers are exposed to during overland transport, as well as neck injuries sustained by combat pilots. Dr Netto frequently advises local councils, industry and government departments about the musculoskeletal demands of physically challenging work to better facilitate a reduction in injury and an increase in productivity.


Workplace injury costs Australia approximately 5% of gross domestic product (GDP). In real dollar terms, this equates to AUD$8 billion in direct compensation and AUD$60 billion in indirect costs. Industries such as manufacturing, health and community services, and construction report the highest injury rates. Body stress as well as slips, trips and falls accounted for more than 50% of the cost of all injuries. In 2008-09, 611,300 cases of workplace injury were reported, and more than half of these were recurrent.

The broader field of occupational health and safety has identified hard or physically demanding work as a main driver of injury in the workplace. Specific tasks such as lifting, lift-and-carry, and push-and-pull have been identified as injurious. Further, work involving the use of specialist clothing, extra loads and vibration has also been linked with injury. From a biomechanical perspective, mechanisms of injury are better understood through the use of physical monitoring and in-vivo modelling.

Current and future research in this area is concentrating on understanding not only singular tasks but also work flow patterns and how these affect human performance. Further, advances in rehabilitation techniques are trying to reduce the alarming number of recurrent injuries. These techniques have also paved the way for prehabilitation and prevention strategies with better training of workers and the use of assistive devices. Continued research in this area ensures a concerted effort in reducing the cost burden of workplace injury.

Monday 11th March Anwar Hosen Control of polystyrene batch reactor using fuzzy logic controller 


Control of polymerization reactors is a challenging issue for researchers due to the complex reaction mechanisms. Many reactions occur simultaneously during polymerization, leading to a polymerization system that is highly nonlinear in nature. In this work, a nonlinear advanced controller, namely a fuzzy logic controller (FLC), is developed for controlling a batch free-radical polystyrene (PS) polymerization reactor. Temperature is used as an intermediate control variable to control polymer quality, because the quality and quantity of the polymer products depend directly on temperature. Different FLCs are developed by changing the number of fuzzy membership functions (MFs) for the inputs and output. The final tuned FLC results are compared with those of another advanced controller, a neural network based model predictive controller (NN-MPC). The simulation results reveal that the FLC performs better than the NN-MPC in terms of quantitative and qualitative performance criteria.
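
To give a flavour of how such a controller is built, the sketch below implements a minimal Mamdani-style FLC mapping a temperature error to a heating command, with triangular MFs, three rules, and centroid defuzzification. The MF breakpoints, rule base, and output range are illustrative assumptions, not the tuned controller from this work.

```python
# Minimal Mamdani-style fuzzy logic controller for a temperature error:
# triangular membership functions, three rules, centroid defuzzification.
# All breakpoints and rules are illustrative assumptions.

def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def flc_heating(error):
    """Map error = setpoint - measured temperature (deg C) to heating power 0..1."""
    # Fuzzify the input
    neg = tri(error, -20.0, -10.0, 0.0)
    zero = tri(error, -5.0, 0.0, 5.0)
    pos = tri(error, 0.0, 10.0, 20.0)
    # Rules: negative error -> low power, zero -> medium, positive -> high.
    # Aggregate the clipped output sets and take the centroid over [0, 1].
    num = den = 0.0
    for i in range(101):
        u = i / 100.0
        mu = max(min(neg, tri(u, -0.5, 0.0, 0.5)),
                 min(zero, tri(u, 0.0, 0.5, 1.0)),
                 min(pos, tri(u, 0.5, 1.0, 1.5)))
        num += mu * u
        den += mu
    return num / den if den else 0.5
```

A reactor below its setpoint (positive error) receives a higher heating command than one at or above it, which is the qualitative behaviour a temperature-tracking FLC needs.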

Monday 4th March Zoran Najdovski Investigating the Biomechanics of C. elegans within a microfluidic environment 


This work is presented in two main parts and outlines the work completed at Nagoya University, Japan, in 2012 as part of my Australian Endeavour Fellowship for Japan. The first part investigated the biomechanics of C. elegans through the analysis of birefringence within its body. The second stage focused on the development of a microfluidic system that would aid in the analysis of this birefringence.

Stage 1: The visible birefringence within the C. elegans' body borders is caused by changes in stress within its body. We investigated how machine vision and microfluidic devices can be used to quantify this stress.

Stage 2: This stage focused on the development of a microfluidic system that would aid in the analysis of the following:

  • To measure and quantify the force profile of an adult C. elegans throughout its natural motion.
  • To estimate the C. elegans birefringent body border thickness to quantify the force/stress profile.
  • To quantify the photoelastic coefficient of C. elegans.
Monday 25th February Mohsen Moradi Dalvand Effects of Force Feedback in a Robotic Assisted Minimally Invasive Surgery System (PRAMiSS) 


In this research, a robotic assisted system for minimally invasive surgery operations (PRAMiSS) is introduced that has haptic feedback capabilities directly from the surgery site. To measure the tip/tissue interaction forces, an automated modular laparoscopic instrument with force feedback capabilities was proposed that is able to quickly and easily change between a variety of tip types. Four sets of experiments, using only vision feedback, only force feedback, simultaneous force and vision feedback, and direct manipulation, were conducted to evaluate the role of sensory feedback from sideways tip/tissue interaction forces in characterising tissues of varying stiffness. 20 human subjects were involved in the experiments, for a total of at least 1440 trials. Single-factor analysis of variance (ANOVA) and Tukey HSD methods were employed to statistically analyse the experimental results. The experimental data from the characterization results, the number of extra tries for each trial, and the statistical analysis are presented and discussed in this paper. Results confirm that providing both vision and force feedback leads to better tissue characterization than providing only vision feedback or only force feedback, and also increases certainty compared with direct palpation.

Wednesday 20th February Clint Heinze External presentation  Challenges in Simulating Air Combat Operations 


Clint Heinze, from the Defence Science and Technology Organisation, is currently seconded to the Defence Science Institute as Associate Director.

Dr Heinze has a degree in Aerospace Engineering from RMIT and a PhD in artificial intelligence from the University of Melbourne. Prior to his appointment to the DSI, he was Head of Computational Sciences in the Air Operations Division of DSTO. In this role he led the group providing modelling and simulation support to the study of military air operations and undertaking broad research into the computational sciences.

Dr Heinze's personal research has focussed on the application of artificial intelligence to military simulation, which complements his various professional roles that have for the last twenty-four years centred on air combat aircraft, systems and operations specifically, and the study of air power more generally. Most recently he has overseen operations research support to the acquisition of the Joint Strike Fighter and EA-18G Growler and the in-service support to the Hornet and Super Hornet.


Air combat operations are made complex by the technology involved in military systems, by the networking and connectedness of combat aircraft and their dependence on the electromagnetic spectrum, and by less tangible human factors at the social/organisational, physiological and psychological levels. Employing simulation to evaluate and assess future capability creates even more challenges. When looking forward into possible futures (decades away), these complexities become wrapped in uncertainties that combine to create significant modelling challenges. This brief presentation will discuss some of the challenges facing those who seek to simulate future air combat operations and will indicate the steps being taken in the Air Operations Division of DSTO to address these issues.

Monday 18th February Khashayar Khoshmanesh Professional Development  Paper Structure 


This presentation aims to share some experiences in writing journal papers with early-career researchers, especially newly joined PhD students. A journal paper is broken down into 12 sections, and the configuration and aim of each section will be discussed. In particular, the parts that journal reviewers look for are emphasized.

Monday 11th February Husaini Aza Mohd Adam Evaluating Vibration for Visualization using Haptics 


Vibration is a haptic sensation that is useful in visualization. A vibrotactile display is a device used to deliver messages through vibration to the user's skin. Vibration is generated using high frequency and low amplitude, creating a sensation on the skin called vibration perception. This research focuses on using sinusoidal vibration, in which amplitude and frequency are used to generate vibration with a force-feedback device. Psychophysical experiments are conducted to test the sensitivity of the device. The goal of the experiments is to identify the device's suitable range of vibration, which is essential for recognising different stimuli.
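
A sinusoidal vibration command of the kind described is straightforward to generate; the sketch below samples one at a typical haptic update rate. The 1 kHz rate, 250 Hz frequency, and 0.5 N amplitude are illustrative values, not the settings used in the experiments.

```python
import math

# Generate a sinusoidal vibration force command sampled at the haptic loop
# rate. The rate, frequency, and amplitude are typical illustrative values.
RATE = 1000           # haptic update rate (Hz)
FREQ = 250.0          # vibration frequency (Hz)
AMP = 0.5             # vibration amplitude (N)

def vibration_samples(duration_s):
    """Return force samples (N) for the given duration."""
    n = int(RATE * duration_s)
    return [AMP * math.sin(2 * math.pi * FREQ * t / RATE) for t in range(n)]

force = vibration_samples(0.01)   # 10 ms of a 250 Hz waveform
```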

Monday 4th February Wael Abdelrahman Applying collaborative/collective intelligence 


In this talk, I will address collaborative/collective intelligence algorithms, which can be found in platforms such as multi-agent systems (MAS) and ant colony optimisation. Such algorithms are useful in many problems where the solution needs to be reached in a decentralised manner and with a high level of autonomy. The focus of this talk will be on how to apply such efficient models to a realistic problem of multiple autonomous individuals that need to make real-time decisions and have an efficient governing logic. In addition, the problem has another constraint that requires the environment variables to be changeable. This includes the end goal, the individual attributes, the environment components, and even whether the individual control will be automatic all the time or will receive some manual aid. An open-ended architecture is the only solution that can handle such a problem, and this is where the aforementioned intelligent algorithms come in handy, with their good level of adaptiveness and modelling flexibility.
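
As a toy illustration of the decentralised feedback at the heart of ant colony methods, the sketch below evolves pheromone levels on two routes: trails decay each iteration (evaporation) while shorter routes receive larger deposits, so the colony's route-choice probability concentrates on the short route. The route lengths, evaporation rate, and deposit rule are illustrative assumptions.

```python
# Pheromone feedback on two routes from nest to food: per iteration, trails
# evaporate and each traversed route receives a deposit inversely
# proportional to its length. Lengths and parameters are illustrative.
LENGTHS = {"short": 1.0, "long": 2.0}
EVAPORATION = 0.1

pheromone = {route: 1.0 for route in LENGTHS}
for _ in range(100):
    for route in pheromone:
        pheromone[route] = ((1 - EVAPORATION) * pheromone[route]
                            + 1.0 / LENGTHS[route])

def choice_probability(route):
    """Probability an ant picks this route, proportional to pheromone."""
    return pheromone[route] / sum(pheromone.values())
```

No central coordinator ranks the routes; the preference for the short route emerges purely from the local deposit-and-evaporate dynamics, which is the decentralised quality the talk highlights.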


Deakin University acknowledges the traditional land owners of present campus sites.

27th February 2015