Centre for Intelligent Systems Research

Defence Systems Integration and Robotics Lab
(co-funded by Department of Defence)

Haptically Enabled tele-operative systems for IED render safe

This research develops state-of-the-art technologies for the remote render-safe of Improvised Explosive Devices (IEDs) under a CTD contract awarded by the Department of Defence. Utilising haptics (force feedback technology), stereovision (a binocular video stream for depth perception) and intuitive user controls, the robots have been engineered to deliver maximum effectiveness while keeping the training liability to a minimum.

In Victoria, CISR's OzBot series of mobile platforms has been used by the Victorian Police in a first-responder capacity, exploiting the 30-second system boot-up and man-portable design to get eyes on target at the earliest possible moment. The research focuses on reducing operator fatigue, minimising the training and maintenance liability, developing simulation technologies for increased training availability, and developing mobile platforms with increased range, payload, manipulator reach and capability.

OzBot



Gas Detection with Artificial Intelligence for a Military Robot

Pattern recognition with Artificial Neural Networks (ANNs) is a well-researched field; this project applies it to gas detection by feeding an ANN with input from an array of gas sensors mounted on the OzBot. The sensor array allows the processor to determine the gas type and to quantify the gas concentration. The design is based on a low-cost module with a purpose-built circuit board. The ANN is implemented in the hardware of a Field Programmable Gate Array (FPGA), and the processor is a soft-core NIOS processor implemented in the logic of the Altera FPGA. This custom processor provides a 24-bit digital data bus connection to the analog-to-digital converter (ADC); signal conditioning circuits between the sensors and the ADC remove any spurious interfering signals. The gas sensing module uses the sensor array to discriminate the gas type and to determine the concentration of the detected gas. Training of the ANN is performed under controlled conditions, where a known dataset (concentrations of a target gas) is applied to the module. The hardware ANN must use an approximation of the transfer function of the algorithm; the approximation is tested against known datasets and checked for an acceptable error margin so that gas type and concentration are determined accurately.

Sensor values
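
As a rough illustration of the transfer-function approximation mentioned above, the sketch below compares a piecewise-linear approximation of the logistic (sigmoid) activation, of the kind often used for FPGA neurons, against the exact function and reports the worst-case error. The breakpoints, operating range and NumPy implementation are illustrative assumptions, not the module's actual design.

```python
import numpy as np

def sigmoid_exact(x):
    """Reference logistic transfer function."""
    return 1.0 / (1.0 + np.exp(-x))

def sigmoid_pwl(x):
    """Piecewise-linear sigmoid approximation of the kind commonly used
    in FPGA neurons (breakpoints here are illustrative only)."""
    x = np.asarray(x, dtype=float)
    y = np.empty_like(x)
    y[x <= -4.0] = 0.0
    y[x >= 4.0] = 1.0
    mid = (x > -4.0) & (x < 4.0)
    # linear segments between tabulated sigmoid values
    xp = np.array([-4.0, -2.0, -1.0, 0.0, 1.0, 2.0, 4.0])
    y[mid] = np.interp(x[mid], xp, sigmoid_exact(xp))
    return y

# Compare the approximation against the exact function over the operating
# range and report the worst-case error, in the spirit of validating the
# hardware ANN against known datasets.
xs = np.linspace(-6, 6, 1201)
err = np.max(np.abs(sigmoid_exact(xs) - sigmoid_pwl(xs)))
print(f"max |exact - approx| = {err:.4f}")
```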


Autonomous Data Fusion for Enhanced Situational Awareness

Data fusion algorithms are becoming vital tools for situational awareness in domains where decision-making depends on combining information originating from multiple sources or sensors. A typical scenario could be a swarm of cooperative mobile robots or sensors operating in a hazardous environment, such as an automated battlefield or a disaster area, or a large integrated system such as a fighter aircraft. In all of these scenarios, there must be some measure of how good or bad the sensed signals are before they are sent to the decision node. In general, maintaining autonomy in fusion systems requires clear identification of the fusion objective function, the fusion errors, the worthiness of the signals, and the maximum amount of information that can be squeezed into given spatial dimensions.

By definition, a fusion algorithm aims to transfer informative features from the source signals into the fused outcome. Due to technological limitations such as bandwidth, processing capacity, battery life, storage and response time, fusing all features from the source signals into the fused one is not possible. Therefore, the objective of signal fusion can be restated as: (1) transferring the important features from the source signals, and (2) ignoring the non-important features and minimising their effect on the fused signal. According to these objectives, Type-I fusion error, also known as false negative, is an estimate of the number of important features that have not been identified as fusion worthy; this is the type of error that all fusion performance metrics have measured so far. On the other hand, Type-II fusion error, also known as false positive, is the error of fusing a feature that is not fusion worthy; this kind of error is also known as a fusion artifact. In order to measure these two types of errors, a perfectly fused test case should be developed as a control case. The perfect fusion needs to be carried out on two source signals for which we know with certainty what the result should be, and it requires identifying control cases, namely 0- and ∞-signals, for both Type-I and Type-II errors.

Typical image fusion results and errors. Test images were obtained from TNO's image fusion test cases
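
A minimal sketch of how the two error types could be counted once a perfectly fused control case has identified which features are fusion worthy. The feature-set representation and the example feature ids are hypothetical; feature extraction itself is outside the sketch.

```python
def fusion_errors(worthy, fused):
    """Estimate Type-I and Type-II fusion errors from feature sets.

    worthy : set of feature ids judged fusion worthy by the perfect
             (control) fusion of the two source signals
    fused  : set of feature ids actually present in the fused signal

    Returns (type_i, type_ii) as rates in [0, 1].
    """
    missed = worthy - fused        # important features not transferred
    spurious = fused - worthy      # artifacts fused despite not being worthy
    type_i = len(missed) / max(len(worthy), 1)     # false negatives
    type_ii = len(spurious) / max(len(fused), 1)   # false positives
    return type_i, type_ii

# Hypothetical example: the control case says features 1-4 are worthy,
# while the fused result carried 1, 2, 3 plus an artifact 9.
print(fusion_errors({1, 2, 3, 4}, {1, 2, 3, 9}))   # (0.25, 0.25)
```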


Augmented Collision Detection using Stereo Imagery in an Unstructured Environment

Stereoscopic teleoperation of remotely manipulated robots is a well-defined and well-researched field. However, the stereo cameras this requires can be used for much more than simply providing stereoscopic video to the operator: they can drive a SLAM system for robot localisation and navigation, and allow full 3D mapping and surface reconstruction of the environment. Using these capabilities, a virtual model of the robot's environment can be built up and, by integrating this model with a full kinematic model of the robot, used for simulation and task-planning purposes. Potential applications are to allow the operator to view the virtual environment on the stereo display and thereby:

  • get different viewpoints of the robot and its environment than those currently offered by the cameras
  • revisit previously traversed areas of the environment
  • conduct task pre-planning and simulation with the robot model before actually attempting the task.

The simulation of the robot in the virtual environment would also enable the prediction of possible collisions and undesired interactions with the environment, allowing the operator to be alerted to potential dangers before they occur. This research focuses on developing a novel framework that, using the stereo cameras already present for stereoscopic teleoperation, facilitates these applications.

Framework for mapping and simulation
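
A minimal sketch of the collision-alert idea, assuming the robot links are approximated by bounding spheres placed from the kinematic model and the reconstructed environment is available as a set of occupied 3D points. The geometry, names and clearance threshold are illustrative assumptions only.

```python
import numpy as np

def predict_collisions(link_spheres, occupied_points, clearance=0.05):
    """Flag links whose bounding spheres come within `clearance` of any
    occupied point in the reconstructed environment model.

    link_spheres    : list of (name, centre_xyz, radius) from the robot's
                      forward kinematics at the simulated pose
    occupied_points : (N, 3) array of occupied cells from the stereo map
    """
    occupied_points = np.asarray(occupied_points, dtype=float)
    alerts = []
    for name, centre, radius in link_spheres:
        dists = np.linalg.norm(occupied_points - np.asarray(centre), axis=1)
        if np.any(dists < radius + clearance):
            alerts.append((name, float(dists.min())))
    return alerts

# Hypothetical pose and map: the gripper sphere comes within the 5 cm
# clearance of a surface point, so it is reported before the operator
# commits to the motion.
spheres = [("base", (0.0, 0.0, 0.1), 0.15), ("gripper", (0.6, 0.1, 0.3), 0.05)]
cloud = np.array([[0.62, 0.10, 0.38], [1.50, 0.00, 0.20]])
print(predict_collisions(spheres, cloud))
```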


Competitive Bidding Strategies for Controlling Autonomous Mobile Elements

Controlling autonomous mobile elements is a key question for many application domains. The main objective is to provide effective coordination between team members in order to fulfil the mission objectives. Deploying mobile elements to handle the data collection operation in wireless sensor networks (WSNs) has proven effective in minimising the energy consumption of the sensors and prolonging the network lifetime. Traditional mechanisms are based on either a spatial or a temporal distribution, and do not necessarily select the best team member to carry out the current task. By contrast, the competitive scheme presented in this work employs a bidding strategy based on a single-item, lowest-price sealed-bid auction. Each mobile element competes to win the current task by submitting a bid reflecting its ability to service the task; the mobile element with the minimum bid is the winner and is granted the task. The results clearly show that the proposed control scheme outperforms the traditional schemes in minimising both the data collection time and the distance travelled by each mobile element.

Data collection time and distance ratio comparisons
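
A minimal sketch of the single-item, lowest-price sealed-bid allocation described above. The bid function used here (straight-line travel distance) is an illustrative assumption; the actual bid could also weigh battery level, load or other measures of ability to service the task.

```python
import math

def travel_bid(element_pos, task_pos):
    """Illustrative bid: the distance the mobile element would travel to
    service the task (lower is better)."""
    return math.dist(element_pos, task_pos)

def allocate_task(elements, task_pos):
    """Single-item, lowest-price sealed-bid auction: every element submits
    one bid, the minimum bid wins, and the winner is granted the task."""
    bids = {name: travel_bid(pos, task_pos) for name, pos in elements.items()}
    winner = min(bids, key=bids.get)
    return winner, bids

# Hypothetical team of three mobile elements competing for a data
# collection task at (4, 3); ME-2 submits the lowest bid and wins.
team = {"ME-1": (0.0, 0.0), "ME-2": (5.0, 5.0), "ME-3": (4.0, 0.0)}
print(allocate_task(team, (4.0, 3.0)))
```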


Optimal Fault Tolerant Robotic Manipulators

When a robotic manipulator is used for outer-space or deep-sea missions, fault tolerance is essential. In a general sense, fault tolerance contributes to the dependability of the manipulator, and in the recent trend of robotics research dependability is necessary for robotic arms working in interaction with humans, for instance in tele-surgery applications. Fault tolerant manipulators remain functional, and their tasks are maintained, even under failure. They require appropriate consideration in the design and operation of the manipulator as well as specific control strategies for the faulty manipulator; these considerations are taken into account to design manipulators that optimally maintain their tasks despite joint failures. In this project, different issues of the optimal design, optimal planning and optimal control of fault tolerant manipulators are studied. The work so far includes fault tolerant workspace analysis, fault tolerant motion and fault tolerant force of the manipulators, and optimality of the manipulator design. We have tested the frameworks for fault tolerant motion and force, and we have found that optimality in design is related to a class of symmetric hyper-geometries. Fault tolerance has also been studied for multiple cooperative manipulators and for human-robot cooperation in load-carrying scenarios.

A sample of a fault tolerant trajectory of a robotic manipulator.
The trajectory of the faulty manipulator (dashed curve) is maintained very close to the trajectory of a healthy manipulator (solid curve)
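
As a rough illustration of the locked-joint analysis, the sketch below checks whether a redundant planar arm retains usable manipulability after each joint in turn is locked (its column removed from the Jacobian). The 4R arm, link lengths and pose are hypothetical and are not the project's manipulators.

```python
import numpy as np

def planar_jacobian(q, lengths):
    """2 x n position Jacobian of a planar serial arm."""
    n = len(q)
    cum = np.cumsum(q)
    J = np.zeros((2, n))
    for i in range(n):
        J[0, i] = -np.sum(lengths[i:] * np.sin(cum[i:]))
        J[1, i] = np.sum(lengths[i:] * np.cos(cum[i:]))
    return J

def manipulability(J):
    """Yoshikawa manipulability measure sqrt(det(J J^T))."""
    return float(np.sqrt(np.linalg.det(J @ J.T)))

def locked_joint_manipulability(q, lengths):
    """Manipulability of the reduced arm after locking each joint in turn,
    a simple indicator of whether the task can still be maintained after
    that joint failure."""
    J = planar_jacobian(q, lengths)
    return {i: manipulability(np.delete(J, i, axis=1)) for i in range(len(q))}

# Hypothetical 4R planar arm: nonzero values mean the end-effector can
# still move in both directions after the corresponding joint fails.
q = np.array([0.3, 0.6, -0.4, 0.5])
L = np.array([0.4, 0.4, 0.3, 0.2])
print(f"healthy arm: {manipulability(planar_jacobian(q, L)):.3f}")
print({k: round(v, 3) for k, v in locked_joint_manipulability(q, L).items()})
```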


Force Field Analysis

Potential field and force field analysis has been identified as an efficient approach for the modelling and simulation of dynamic and spatial systems. The applications of force field analysis are diverse: it is used for motion and path planning for robots and unmanned vehicles, for modelling atomic structures and the dynamics of interacting entities, and it has also been deployed for social and psychological modelling. The application of force fields to crowd dynamics, haptic systems and robotics applications are the current research activities within CISR.

For instance, mission and path planning in dynamic environments for multiple mobile robots or unmanned vehicles has been developed in 2D and 3D spaces based on force field analysis.

Mission assignment and path planning for multiple robots with cooperation in a 2D space
The figure above shows mission assignment and path planning for three cooperating robots (labelled 1, 2, 3) in a 2D space. Collision-free paths are obtained for the robots, which are required to move cooperatively to eight target points while avoiding collisions with the manoeuvring obstacles. Stability analysis, higher-level control and decision-making algorithms, and the use of force fields in novel areas are current areas of interest within CISR.
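
A minimal sketch of the classic attractive/repulsive force-field step used for 2D path planning, in the spirit of the planner described above. The gains, influence radius, obstacle and goal positions are illustrative assumptions.

```python
import numpy as np

def force_field_step(pos, goal, obstacles, k_att=1.0, k_rep=0.5,
                     influence=1.0, step=0.05):
    """One gradient step of an attractive/repulsive potential field planner
    in 2D (gains and influence radius are illustrative)."""
    pos, goal = np.asarray(pos, float), np.asarray(goal, float)
    force = k_att * (goal - pos)                 # attraction towards the goal
    for obs in obstacles:
        diff = pos - np.asarray(obs, float)
        d = np.linalg.norm(diff)
        if 1e-6 < d < influence:                 # repulsion near obstacles
            force += k_rep * (1.0 / d - 1.0 / influence) / d**2 * (diff / d)
    return pos + step * force / (np.linalg.norm(force) + 1e-9)

# Hypothetical scenario: a robot at the origin moves towards (3, 2) while
# an obstacle at (1.5, 1.0) bends the path around it.
pos, goal, obstacles = np.array([0.0, 0.0]), (3.0, 2.0), [(1.5, 1.0)]
for _ in range(60):
    pos = force_field_step(pos, goal, obstacles)
print(np.round(pos, 2))
```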


Dynamic base station relocation for sensor networks

Energy conservation and network lifetime are two important factors in sensor networks. This research investigates the development of an efficient, practical algorithm that dynamically repositions a mobile base station to reduce the transmission distances between the sensors and the base station, so that the transmission energy used in the routing topology is minimised and the lifetime of the network is extended.

Dynamic path of the mobile base station
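
A minimal sketch of one possible relocation heuristic, pulling the base station towards a residual-energy-weighted centroid of the sensors; this is an assumption for illustration and not necessarily the algorithm developed in this project.

```python
import numpy as np

def relocate_base_station(sensor_pos, residual_energy, current_bs, max_move=2.0):
    """Illustrative relocation heuristic (not the project's algorithm):
    pull the mobile base station towards the residual-energy-weighted
    centroid of the sensors, so low-energy nodes get shorter, cheaper
    transmission distances. Movement per round is capped at max_move."""
    sensor_pos = np.asarray(sensor_pos, float)
    w = 1.0 / (np.asarray(residual_energy, float) + 1e-9)  # weight weak nodes more
    target = (w[:, None] * sensor_pos).sum(axis=0) / w.sum()
    move = target - np.asarray(current_bs, float)
    dist = np.linalg.norm(move)
    if dist > max_move:
        move *= max_move / dist
    return np.asarray(current_bs, float) + move

# Hypothetical round: the node at (8, 1) is almost depleted, so the base
# station drifts towards it.
sensors = [(0, 0), (2, 5), (8, 1)]
energy = [0.9, 0.8, 0.1]
print(np.round(relocate_base_station(sensors, energy, (3, 3)), 2))
```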

