Our Awesome Animations and Animatronics Outreach Program allows high school students to experience university life and learn about advances in information technology.
A sample of the animation and video work our staff and students undertake as part of our research.
Created by Jonathan Wan, supervised by Asst/Prof Wei Liu.
This video shows our school's AIBO (Artificial Intelligence RoBOt). A team of six AIBOs, called the UWArriors, will be trained for robot soccer competitions, among other things.
Developed by OneTwenty (Anthony Prior, James Strauss, Minh Tran, Jason C. Wong) with Adam Matera and Poya Manouchehri.
Zyberflux is a small game project that the Onetwenty group created for the 2007 IZNullarbor game competition, where it placed second. Onetwenty is a group of PhD students, mainly from UWA, many of whom were part of the 60Hz real-time graphics research group. The game was developed on Linux with their own custom physics, animation, and rendering systems, along with libraries such as OpenGL, SDL, and FMOD (for sound).
Created by: Wong Tzu Yen, supervised by Professor Amitava Datta and Dr Peter Kovesi.
An image-based rendering technique turns a short video of a crawling cockroach into an infinitely long, randomly looping video texture.
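The core idea behind such a video texture can be sketched as follows: compare every pair of frames, then during playback jump only between frames whose successors look alike, so the loop never shows a visible seam. This is a minimal illustrative sketch in NumPy, assuming frames are equal-sized arrays; the actual system behind the cockroach clip may differ in its distance measure and jump selection.

```python
import numpy as np

def frame_distances(frames):
    """Pairwise L2 distance between every pair of frames."""
    n = len(frames)
    return np.array([[np.linalg.norm(frames[i] - frames[j]) for j in range(n)]
                     for i in range(n)])

def play(frames, threshold, steps, seed=0):
    """Return `steps` frame indices, randomly jumping between frames that
    resemble the natural successor so playback can continue forever."""
    rng = np.random.default_rng(seed)
    dist = frame_distances(frames)
    n = len(frames)
    i, out = 0, []
    for _ in range(steps):
        out.append(i)
        target = (i + 1) % n  # the frame that would naturally come next
        # any frame close to the natural successor is a smooth continuation
        candidates = np.flatnonzero(dist[target] <= threshold)
        i = int(rng.choice(candidates))
    return out
```

Because the natural successor is always among the candidates (its distance to itself is zero), playback degrades gracefully to ordinary sequential play when no other frame is similar enough.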
Created by Craven Alexander, supervised by Assoc/Prof Du Huynh.
This video shows a reconstruction of a face and skull in 3D generated from a Computed Tomography (CT) scan of a human head.
Developed by Paul Bourke (Creator), Chris Fluke, Evan Hallein (Audio), Chris Power (Astrophysicist).
Large-scale volume rendering of a cosmological N-body simulation. The projection is an angular fisheye intended for viewing inside a planetarium, originally created for the ASTC (Association of Science Technology Centers) full-dome show reel in 2005. The original sequence was rendered at 3600 pixels square.
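In an angular (equidistant) fisheye, the distance of a pixel from the image centre is proportional to the angle between the view direction and the dome axis, which is what makes the image suitable for projection onto a hemispherical dome. The sketch below shows that mapping from pixel coordinates to a unit view direction; it is an illustrative reconstruction of the projection geometry, not the renderer actually used for this sequence.

```python
import numpy as np

def fisheye_to_direction(u, v, size, aperture=np.pi):
    """Map pixel (u, v) in a size x size angular-fisheye image to a unit
    view direction; returns None outside the image circle.
    aperture is the total field of view in radians (pi = 180-degree dome)."""
    # normalized coordinates in [-1, 1], dome axis at the image centre
    x = 2.0 * u / size - 1.0
    y = 2.0 * v / size - 1.0
    r = np.hypot(x, y)
    if r > 1.0:
        return None  # corner pixels fall outside the fisheye circle
    theta = r * aperture / 2.0   # angle from the dome axis, linear in r
    phi = np.arctan2(y, x)       # azimuth around the axis
    return np.array([np.sin(theta) * np.cos(phi),
                     np.sin(theta) * np.sin(phi),
                     np.cos(theta)])
```

The centre pixel maps straight up the dome axis, and the rim of the circle maps to the dome's horizon when the aperture is 180 degrees.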
Created by Hong Chuan Yu, supervised by W/Prof Mohammed Bennamoun.
Before we can perform face recognition, the portion of the image that contains the face must first be detected. Here is a demo of face detection using template matching: a frontal-face template was used to detect the presence of a face in each frame of the video. Although the algorithm did not manage to detect the face when the subject turned sideways or tilted his head significantly, the rectangle locked onto his face again as soon as the camera captured a frontal view.