focusing on spherical balls of various sizes, we are now experimenting with
various objects of unknown dynamic characteristics, such as sponge balls,
long cylindrical cans, and paper airplanes. Our system uses low-cost
vision-processing hardware for simple information extraction. Each camera signal is
processed independently on vision boards designed by other members of the MIT
AI Laboratory (the Cognachrome Vision Tracking System). These vision boards
provide us with the center of area, major axis, number of pixels, and aspect
ratio of the color-keyed image. The two Fast Eye Gimbals allow us to locate
and track fast, randomly moving objects using "Kalman-like"
filtering methods that assume no fixed model for the behavior of the motion.
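One way to sketch such model-free tracking is with an alpha-beta filter, a simple "Kalman-like" estimator that updates position and velocity estimates from raw measurements without assuming a fixed motion model. This is an illustrative construction, not the system's actual tracker; the gain values and the 60 Hz sample rate are assumptions.

```python
# Sketch of a model-free "Kalman-like" tracker: an alpha-beta filter
# for one coordinate. Gains and sample rate are illustrative only.

def make_alpha_beta_tracker(alpha=0.85, beta=0.05, dt=1 / 60.0):
    """Return an update function that smooths raw position
    measurements and maintains a velocity estimate."""
    state = {"x": None, "v": 0.0}

    def update(z):
        if state["x"] is None:          # first measurement initializes the state
            state["x"] = z
            return z
        x_pred = state["x"] + state["v"] * dt   # constant-velocity extrapolation
        r = z - x_pred                          # innovation (residual)
        state["x"] = x_pred + alpha * r         # correct position estimate
        state["v"] = state["v"] + (beta / dt) * r   # correct velocity estimate
        return state["x"]

    return update
```

Because the correction gains are fixed rather than computed from a motion model, the filter tracks whatever dynamics the measurements exhibit, at the cost of tuning alpha and beta by hand.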
Independent of the tracking algorithms, we use least-squares techniques to
fit polynomial curves to prior object-location data and so determine the future
path. With this knowledge in hand, we can calculate a path for the WAM that
matches trajectories with the object, accomplishing the catch and a smooth
object/WAM post-catch deceleration.
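A minimal sketch of the least-squares prediction step, assuming NumPy and a single coordinate: fit a low-degree polynomial to recent (time, position) samples and evaluate it at a future time. The function name and degree are illustrative; a ballistic vertical coordinate, for instance, is well modeled by a degree-2 polynomial.

```python
# Sketch of least-squares path prediction: fit a polynomial to prior
# location data and extrapolate it to a future time.
import numpy as np

def predict_position(times, positions, t_future, degree=2):
    """Least-squares fit of a degree-`degree` polynomial to the
    observed (time, position) samples, evaluated at t_future."""
    coeffs = np.polyfit(times, positions, degree)
    return np.polyval(coeffs, t_future)
```

For example, samples drawn from a projectile's height curve z(t) = z0 + v*t - (g/2)*t**2 are recovered exactly by a quadratic fit, so the extrapolated catch point is accurate as long as the flight stays ballistic.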
In addition to
the basic least squares techniques for path prediction, we study
experimentally nonlinear estimation algorithms to give "long term"
real-time prediction of the path of moving objects, with the goal of robust
acquisition. The algorithms are based on stable on-line construction of
approximation networks composed of state space basis functions localized in
both space and spatial frequency. As an initial step, we have studied the
network's performance in predicting the path of light objects thrown in air.
Further application may include motion prediction of objects rolling,
bouncing, or breaking up on rough terrain.
Successful results for the application of this network have been obtained in
catching sponge balls and even paper airplanes!
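One way to sketch such an approximation network (an illustrative construction, not the authors' implementation) uses Gaussian-windowed cosine basis functions, which are localized in both space and spatial frequency, with linear output weights adapted online by a normalized LMS rule; normalized LMS is a standard stable update for models that are linear in their parameters.

```python
# Sketch of an on-line approximation network with basis functions
# localized in space (Gaussian envelope) and spatial frequency
# (cosine carrier). All parameter choices here are assumptions.
import numpy as np

class LocalizedBasisNetwork:
    def __init__(self, centers, widths, freqs, lr=0.5):
        self.c = np.asarray(centers, float)   # spatial centers
        self.s = np.asarray(widths, float)    # Gaussian widths
        self.k = np.asarray(freqs, float)     # carrier frequencies
        self.w = np.zeros(len(self.c))        # linear output weights
        self.lr = lr

    def _phi(self, x):
        # Gabor-like basis: Gaussian window times cosine carrier
        return np.exp(-((x - self.c) / self.s) ** 2) * np.cos(self.k * (x - self.c))

    def predict(self, x):
        return float(self.w @ self._phi(x))

    def update(self, x, y):
        # normalized LMS: stable on-line update for the linear weights
        phi = self._phi(x)
        err = y - self.w @ phi
        self.w += self.lr * err * phi / (1.0 + phi @ phi)
        return err
```

Feeding the network streaming (position, next-position) pairs lets it build a predictor of the trajectory on-line, one sample at a time, without a batch training phase.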