Defence systems integration and robotics

IISRI creates systems and solutions for the Australian Defence Force (ADF). We have delivered multiple capability technology demonstrator-funded outcomes and contributed to ADF steering committees and advisory groups. We also hold a number of international patents covering technology developed for counter-improvised explosive device (counter-IED) operations; chemical, biological, radiological and nuclear (CBRN) defence robotics; and motion simulators.


Defence in detail

We provide a broad suite of on-the-ground solutions to the ADF, such as:

  • gas detection systems 
  • tele-operated systems for hazardous threat response
  • collision avoidance technology.

Research projects

Some of our research initiatives and developments include:

OzBots – defence systems in action

Deakin-designed OzBots are capable of climbing stairs, carrying a person – even towing a car. The remotely controlled OzBots are already in service within a number of Victorian and Queensland police units. 

Their potential in other fields is also emerging, with applications in areas such as domestic law enforcement, aeronautics and environmental management being explored.

OzBot co-inventor Dr Mick Fielding explains that the OzBots are usually used as a first responder in security events, providing operators with live video, visible and IR illumination and bi-directional audio.


Haptically enabled tele-operative systems for IED render safe

This research develops state-of-the-art technologies for the remote render-safe of improvised explosive devices (IEDs), under a capability technology demonstrator contract awarded by the Australian Defence Force.

Using haptics (force feedback technology), stereovision (a binocular video stream for depth perception) and a natural user interface, the robots have been engineered to deliver maximum effectiveness while keeping training liability to a minimum.

IISRI's OzBot series of mobile platforms has been used by Victoria Police in a first-responder capacity, exploiting the 30-second system boot-up and man-portable design to get eyes on target as quickly as possible. The research focuses on reducing operator fatigue and minimising training liability by creating a transparent operator 'tele-presence'. It also supports the development of simulation technologies for increased training availability, and of mobile platforms with increased range, payload, manipulator reach and capability.

Gas detection with artificial intelligence for a military robot

Pattern recognition with artificial neural networks (ANNs) is well researched. We apply this to gas detection on the OzBot, using an ANN that receives input from an array of gas sensors. The various sensors feeding the ANN allow the processor to determine both the type and the concentration of the gas.

The design is based around a low-cost module with a purpose-built circuit board. The ANN is implemented in the hardware of a Field Programmable Gate Array (FPGA), with processing handled by a logicware-based NIOS processor in the Altera FPGA. This custom processor provides a 24-bit digital data bus connection to the analog-to-digital converter (ADC); sensor signal-conditioning circuits feed the ADC and remove any spurious interfering signals.

The gas-sensing module uses an array of sensors to determine the type and concentration of the detected gas. The ANN is trained under controlled conditions, where a known dataset (concentrations of a target gas) is applied to the module. Because the ANN is implemented in hardware, it uses an approximation of the neuron transfer function. This approximation is tested against known datasets and checked for an acceptable error margin, so that gas type and concentration can be determined accurately.
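The pipeline above can be sketched in software. This is a minimal, hypothetical NumPy sketch of a one-hidden-layer ANN with a piecewise-linear transfer-function approximation (standing in for the FPGA hardware implementation); the sensor count, layer sizes and weights are illustrative, not the module's actual parameters.

```python
import numpy as np

def approx_sigmoid(x):
    # Piecewise-linear approximation of the sigmoid, mirroring the kind of
    # transfer-function approximation used in hardware ANN implementations.
    return np.clip(0.25 * x + 0.5, 0.0, 1.0)

def forward(sensors, w_hidden, w_out):
    """One-hidden-layer ANN: sensor array in, gas class scores plus concentration out."""
    h = approx_sigmoid(sensors @ w_hidden)
    return h @ w_out

# Hypothetical setup: 8 gas sensors, 6 hidden neurons,
# 3 gas classes plus one concentration output.
rng = np.random.default_rng(0)
w_hidden = rng.normal(size=(8, 6))   # illustrative weights, not trained values
w_out = rng.normal(size=(6, 4))

reading = rng.uniform(0.0, 1.0, size=8)   # conditioned ADC samples
out = forward(reading, w_hidden, w_out)
gas_type = int(np.argmax(out[:3]))        # most likely gas class
concentration = out[3]                    # estimated concentration (arbitrary units)
```

In the real module the weights come from training under controlled gas concentrations, and the approximation error of the transfer function is checked against known datasets as described above.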

Autonomous data fusion for enhanced situational awareness

Data fusion algorithms are becoming vital tools for situational awareness in domains where decision-making is dependent on combined information from multiple sources or sensors. A typical scenario could be a swarm of cooperative mobile robots or sensors operating in a hazardous environment. In a case like this, there must be some scale defining how good or bad the sensed signals are before sending them to the decision node. 

In general, maintaining autonomy in fusion systems requires clear identification of the fusion objective function, fusion errors, worthiness of signals, and the maximum amount of information to be squeezed into certain spatial dimensions. 

By definition, a fusion algorithm aims to transfer informative features from the source signals into the fused outcome. Due to technological limitations – such as bandwidth, processing capacity, battery capacity and health, storage and response time – fusing all features from the source signals into the fused signal is not possible. The objective of signal fusion can therefore be redefined as: 1) transferring the important features from the source signals, and 2) ignoring the non-important features and minimising their effect on the fused signal.

According to these objectives, Type-I fusion error, also known as false negative, is an estimate of the number of important features that have not been identified as fusion-worthy. This is the type of error that all fusion performance metrics have measured so far. Type-II fusion error, also known as false positive, is the error of fusing a feature that is not fusion-worthy; such errors are also known as fusion artifacts. Both can be seen in the figure below.

To measure these two types of errors, a perfectly fused test case must be developed as a control case. The perfect fusion needs to be carried out on two different source signals for which we know with certainty what the result should be. This requires identifying control cases, namely 0- and ∞- signals, for both Type-I and Type-II errors.
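Given such a control case, the two error types reduce to set comparisons. A minimal sketch, assuming features can be identified by ids and that the control case tells us which ones are fusion-worthy:

```python
def fusion_errors(worthy, fused):
    """Count Type-I and Type-II fusion errors against a perfect-fusion control case.

    worthy: set of feature ids known (from the control case) to be fusion-worthy
    fused:  set of feature ids that actually ended up in the fused signal
    """
    type_i = len(worthy - fused)   # false negatives: worthy features missed
    type_ii = len(fused - worthy)  # false positives: fusion artifacts
    return type_i, type_ii

# Hypothetical example: features 1-5 are worthy; the fuser kept 1-4 and added 9.
t1, t2 = fusion_errors({1, 2, 3, 4, 5}, {1, 2, 3, 4, 9})
# t1 == 1 (feature 5 missed), t2 == 1 (artifact 9 fused)
```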

Augmented collision detection using stereo imagery in an unstructured environment

Stereoscopic teleoperation of remotely manipulated robots is a well-defined and researched field. However, the stereo cameras necessitated by this system can be utilised for much more than simply providing stereoscopic video for the operator. The stereo cameras can be used for a SLAM system to enable robot localisation and navigation, and allow full 3D mapping and surface reconstruction of the environment. 

By utilising these capabilities, a virtual model of the robot's environment can be built up. Integrating this model with a full kinematic model of the robot allows it to be used for simulation and task-planning purposes.

Potential applications are to allow the operator to view the virtual environment on the stereo display and thereby:

  • get different viewpoints of the robot and its environment than those currently offered by the cameras
  • revisit previously traversed areas of the environment
  • conduct task pre-planning and simulation with the robot model before actually attempting the task. 

The simulation of the robot in the virtual environment would also enable the prediction of possible collisions and unwanted interactions with the environment, allowing the operator to be alerted to potential dangers before they occur.
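One way such a collision alert could work is a clearance check between points sampled on the robot's kinematic model and the reconstructed environment point cloud. This is a brute-force NumPy sketch under that assumption; the clearance value and geometry are illustrative, not the project's actual method.

```python
import numpy as np

def predict_collision(robot_points, env_cloud, clearance=0.05):
    """Flag a potential collision if any simulated robot point comes within
    `clearance` metres of the reconstructed environment point cloud.

    robot_points: (N, 3) points sampled on the robot's kinematic model
    env_cloud:    (M, 3) 3D points from the stereo reconstruction
    """
    # Pairwise distances between robot samples and environment points.
    d = np.linalg.norm(robot_points[:, None, :] - env_cloud[None, :, :], axis=2)
    return bool((d < clearance).any())

# Hypothetical check: a planned arm pose against a small reconstructed surface.
arm = np.array([[0.4, 0.0, 0.3], [0.5, 0.0, 0.3]])
wall = np.array([[0.52, 0.0, 0.3], [1.0, 0.0, 0.3]])
alert = predict_collision(arm, wall)  # True: one arm point is 2 cm from the wall
```

A production system would use a spatial index (e.g. a k-d tree) rather than the quadratic distance matrix shown here, but the alert logic is the same.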

This research focuses on developing a novel framework that, using the stereo cameras already present for stereoscopic teleoperation, facilitates these applications.

Competitive bidding strategies for controlling autonomous mobile elements

Controlling autonomous mobile elements is a key challenge in many application domains. The main objective is to provide effective coordination between team members in order to fulfil the mission objectives.

Deploying mobile elements to handle data collection in wireless sensor networks has proven effective in minimising the energy consumption of the sensors and prolonging the network lifetime. Traditional mechanisms are based on either a spatial or a temporal distribution, and fail to utilise the best team member for the current task. In contrast, the competitive scheme presented in this work employs a bidding strategy based on a single-item, lowest-price, sealed-bid auction.

Each mobile element competes to win the current task by submitting a bid reflecting its ability to service it. The mobile element with the minimum bid is declared the winner and is granted the task. The results clearly show that the proposed control scheme outperforms traditional schemes in minimising both data collection time and the distance travelled by each mobile element.
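The auction round itself is simple to state. A minimal sketch, with hypothetical element ids and bid values (a real bid would encode factors such as travel distance, remaining energy and current load):

```python
def award_task(bids):
    """Single-item, lowest-price, sealed-bid auction: each mobile element
    submits one bid reflecting its cost to service the task; the lowest
    bidder wins and is granted the task.

    bids: dict mapping element id -> bid value (e.g. estimated travel cost)
    """
    return min(bids, key=bids.get)

# Hypothetical round: three mobile elements bid their cost to reach the sensor.
winner = award_task({"ME-1": 42.0, "ME-2": 17.5, "ME-3": 60.3})
# winner == "ME-2", the lowest bidder
```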

Optimal fault tolerant robotic manipulators

When a robotic manipulator is used for outer-space or deep-sea missions, fault tolerance is essential. In a general sense, fault tolerance contributes to the dependability of the manipulator. In current robotics research, dependability is necessary for robotic arms working in close interaction with humans, such as robots in tele-surgery applications.

Fault tolerant manipulators remain functional, and their tasks are maintained, even under failure. This requires appropriate consideration in the design and operation of the manipulators, as well as specific control strategies for faulty manipulators. These considerations are taken into account to design manipulators that optimally maintain their tasks despite joint failures.

In this project, different issues of the optimal design, planning and control of fault tolerant manipulators are studied. The work so far includes the fault tolerant workspace analysis of the manipulators, fault tolerant motion, fault tolerant force of the manipulators and optimality of the design of the manipulators. 

We have tested the frameworks for fault tolerant motion and force of the manipulators, and have found that optimality in design is related to a class of symmetric hyper-geometries. Fault tolerance has also been studied for multiple cooperative manipulators and for human-robot cooperation in load-carrying scenarios.

Force field analysis

Potential field and force field analysis has been identified as an efficient approach to the modelling and simulation of dynamic and spatial systems. Force field analysis has very diverse applications: it is used for motion and path planning for robots and unmanned vehicles, modelling of atomic structures and entity dynamics, and social and psychological modelling.

The application of force fields to crowd dynamics, haptic systems and robotics is a current research activity within IISRI.

For instance, mission and path planning in dynamic environments for multiple mobile robots or unmanned vehicles has been developed in 2D and 3D spaces based on force field analysis.
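As an illustration of the idea, the classic attractive/repulsive force-field formulation for 2D path planning can be sketched as follows. The gains, influence radius and geometry here are illustrative and not drawn from the IISRI work.

```python
import numpy as np

def force(pos, goal, obstacles, k_att=1.0, k_rep=0.5, influence=1.0):
    """Classic attractive/repulsive potential-field force on a robot in 2D.

    The attractive component pulls toward the goal; each obstacle within its
    `influence` radius pushes the robot away, growing stronger as the robot
    approaches. (Gain values are illustrative.)
    """
    f = k_att * (goal - pos)  # attractive component
    for obs in obstacles:
        diff = pos - obs
        d = np.linalg.norm(diff)
        if 0 < d < influence:
            # Repulsive component: (1/d - 1/influence) scaled toward the robot.
            f += k_rep * (1.0 / d - 1.0 / influence) * diff / d**3
    return f

# Hypothetical plan: step along the force field from start toward goal,
# deflecting around a single obstacle offset from the straight-line path.
pos = np.array([0.0, 0.0])
goal = np.array([5.0, 0.0])
obstacles = [np.array([2.5, 0.2])]
for _ in range(200):
    pos = pos + 0.05 * force(pos, goal, obstacles)
```

The same gradient-following idea extends directly to 3D by adding a coordinate; real planners must also handle the local-minimum traps that pure potential fields are known for.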

Dynamic base station relocation for sensor networks

Energy conservation and network lifetime are two important factors in sensor networks. This research develops an efficient, practical algorithm that dynamically repositions a mobile base station to reduce the distance between it and the sensors, so that the transmission energy used in the routing topology is minimised and the lifetime of the network is extended.
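A simple illustrative repositioning policy (not the project's actual algorithm) is to move the base station to an energy-weighted centroid of the sensors, so that nearly depleted sensors end up with shorter transmission distances:

```python
import numpy as np

def reposition(sensor_pos, residual_energy):
    """Move the mobile base station to the energy-weighted centroid of the
    sensors: low-energy sensors pull the base station closer, shortening
    their transmission distance.

    sensor_pos:      (N, 2) sensor coordinates
    residual_energy: (N,) remaining energy per sensor
    """
    # Weight each sensor by the inverse of its remaining energy.
    w = 1.0 / np.asarray(residual_energy)
    w /= w.sum()
    return w @ np.asarray(sensor_pos)

# Hypothetical network: three sensors, one nearly depleted.
sensors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])
energy = np.array([5.0, 1.0, 5.0])   # the second sensor is nearly depleted
base = reposition(sensors, energy)   # pulled toward the depleted sensor
```

With equal energies this reduces to the plain centroid, which minimises the sum of squared transmission distances; the weighting trades that optimum for longer life of the weakest nodes.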