Computer-Aided Ethology
the ethology of ethologists

How do humans perceive and analyze animal behavior?
Can computers simulate these functions?

Computer-aided ethology aims to simulate human ethological functions, which employ visual observation and cognitive analysis.
In order to apply technologies that automate certain ethological functions, we must analyze and understand the complex behavior of ethologists.

How do animals see other animals and extrapolate behavior?

 

Can computers do this?

What components are required?

Can computers be used to:
  • evaluate an observer's perspective,
  • track a subject's motion, and
  • ascertain behavior?

Most of the required components have only recently become available, at reasonable cost.

 

My Contribution:

As a Ph.D. project, I captured the (X, Y) coordinates* and surface area of 4 mice, over 4 hours, to a data file, every 40 milliseconds (25 data points (images) per second). Each of the four motion files contains approx. 360,000 records!
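To give an idea of the data, here is a minimal sketch of how such a motion file could be read back for analysis. The file name and the exact column layout (time in milliseconds, X, Y, surface area, whitespace-separated integers) are assumptions for illustration, not the actual SUIVI file format.

    # Read one motion file into a list of (time_ms, x, y, surface) tuples.
    # Format assumption: one record per line, four whitespace-separated integers.
    def read_motion_file(path):
        records = []
        with open(path) as f:
            for line in f:
                fields = line.split()
                if len(fields) != 4:
                    continue                 # skip headers or blank lines
                t, x, y, s = (int(v) for v in fields)
                records.append((t, x, y, s))
        return records

    records = read_motion_file("suivi_mouse1.dat")   # hypothetical file name
    print(len(records), "records; first:", records[0])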

[Images: Suivi_3.gif, Suivi_2.gif]

 


3 principal components: Input, Data and Behavior Analysis.


The problem, as I've defined it, is divided into 3 principal components (a minimal sketch of the pipeline follows the list):

- Input (Image) Analysis: how to capture the data.

- Data Analysis: how to convert the data to motion and then to behavior.

- Behavior Analysis: how to create "knowledge" about behavior, which also entails measuring, evaluating and understanding.
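The three components chain into a simple pipeline. The sketch below is only an outline of that data flow; all function names and signatures are hypothetical placeholders, not parts of SUIVI.

    # Hypothetical three-stage pipeline: image -> data -> behavior.
    def input_analysis(frame):
        """Input (Image) Analysis: extract (time_ms, x, y, surface) from one image."""
        raise NotImplementedError        # segmentation / tracking goes here

    def data_analysis(records):
        """Data Analysis: turn raw records into motion measures (deltas, speed, distance)."""
        raise NotImplementedError

    def behavior_analysis(motion):
        """Behavior Analysis: map motion measures to labels (sleep, eat, jump, ...)."""
        raise NotImplementedError

    def process(frames):
        records = [input_analysis(f) for f in frames]
        return behavior_analysis(data_analysis(records))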

 

Various technologies are available, with many more under development. For example:

Input Analysis:
- Active Pixel Sensors,
- Planar Optical Waveguides,
- Field-programmable gate arrays,
- Transputers and
- Storage & Compression.

Data Analysis:
- Motion-Energy Images and Motion History Images,
- Approximate World Models,
- Generalized Image Matching.

Behavior Analysis:
- Learning with multiple representations and multiple goals,
- Kinematic variables,
- Physically-based models,
- Environmental descriptives.

The technology associated with each component can be improved using new, developing technologies.

 


Computer vs. Human Vision

This is an image of just one basin with the mouse, the water and the food distributor.

Fundamental Differences: From a machine-vision perspective the image contains pixels with gray levels. We recognize water, food, sides, floor, ... We can define objects at fixed coordinates and track objects which contrast with other objects. However, it is difficult to express an impression such as "near the food". The subject (i.e. the mouse) surely sees all this from another perspective.

[Images: Suivi_2a.gif, Image_position.gif]

 

(* Note: the X,Y coordinate is the location of the barycenter (center of mass) of the animal)
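As an illustration of this machine-vision view, here is a minimal sketch of how the barycenter is obtained from the animal's foreground pixels, and how an impression like "near the food" could be reduced to a fixed-coordinate test. The food-distributor coordinates and the distance threshold are made-up values, not those of the real basin.

    import math

    FOOD_XY = (310, 40)        # hypothetical fixed coordinates of the food distributor
    NEAR_FOOD_RADIUS = 30      # made-up threshold, in pixels

    def barycenter(pixels):
        """Barycenter (center of mass) of the animal's foreground pixels [(x, y), ...]."""
        xs = [x for x, _ in pixels]
        ys = [y for _, y in pixels]
        return (sum(xs) / len(xs), sum(ys) / len(ys))

    def near_food(x, y):
        """A crude machine version of the impression 'near the food'."""
        return math.dist((x, y), FOOD_XY) <= NEAR_FOOD_RADIUS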

 


How to Validate?

Graphical representation of Basin and Motion Data

How to Present Data?   This diagram of motion data provides certain information but eliminates sequence and time.  Replacing or eliminating data could provide more information for humans but limit a computer's potential to resolve problems. This illustrates the mouse's activity and indicates areas of interest, to the mouse!

Once behavior is assigned by a computer, how can humans verify the machine's evaluation? Display and control programs can help humans verify computer-generated behavior. The data can be displayed and manipulated (indexed, sorted, graphed, ...) in order to obtain and verify information.

[Images: Mous3.gif, El1.gif]

The right-side image is a graphic representation of the basin, its landmarks, the mouse's position and calculated motion data (i.e. speed, distance, etc.).  I developed the program, SUIVI, in order to explain and demonstrate "The Problem" and to help verify certain results.
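For example, distance and speed can be derived directly from consecutive (time, X, Y) records; a minimal sketch follows (SUIVI itself may compute these measures differently).

    import math

    def motion_measures(records):
        """Per-sample distance (pixels) and speed (pixels/s).

        Each record is a (time_ms, x, y, surface) tuple, nominally 40 ms apart."""
        measures = []
        for (t0, x0, y0, _), (t1, x1, y1, _) in zip(records, records[1:]):
            dist = math.hypot(x1 - x0, y1 - y0)
            dt = (t1 - t0) / 1000.0                  # milliseconds -> seconds
            measures.append((t1, dist, dist / dt if dt > 0 else 0.0))
        return measures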

 


General Vs Specific Problems

The problems encountered in this example are similar to those found in many HUMAN AND ANIMAL motion / behavior studies.   Basically, there are two types of problem: specific (e.g. mice) and general (intelligent machine vision).

Although we're using mice in this example,  we are interested in the process and not the subject. 

We can adapt the technology to:
- animal tracking
- autonomous robots
- consumer browsing and shopping behavior studies
- sports and athletic performance aids
- security applications

The software which we are developing for such applications must be very intuitive and adaptive, perform self-evaluations and optimize its own functioning.

        The Specific Problem!


How to get from this............

Time (milliseconds)     X     Y   Surface
...................   ...   ...   .......
              39500    65   251       442
              39540    65   251       423
              39580    65   252       424
              39620    65   252       429
              39660    66   252       459
              39700    66   253       491
              39740    66   255       526
              39780    66   256       541
...................   ...   ...   .......

to this ....

. . . .  Death, Sleep, Eat, Drink, Jump, Explore, . . . .

 

The General Problem!


What is needed.....


     How to do it.....

1) The "Bio logic" problem :

How do Animals (i.e. man) see animals and extrapolate behavior (i.e. information)?

What are the formulas to convert data to behavior?

Can there be an optimized model  ("One size fits all" formulas?)

2) The technical problems:

How can computers help do this?

What hardware and processes / processors (pipeline, image processing, FPGA, fuzzy logic, cameras, etc.) will be available?

How to develop and validate an intuitive user interface?!

 


Data Interpretation

Can programs be trusted to find behavior in all the data?  What does the data for sleep or a jump look like?   Or....?  Can the same programs find the same behaviors in other mice? What other behaviors can humans or computers find, or not find?

No or Little Motion
(dead, sleeping, resting?)

This data represents the number of times an (X, Y) and surface-area combination changes "very little"* and repeats itself without interruption.  Data with little change over time can indicate a behavior.

The definition of Death, Sleep or Rest could depend only upon the duration of the low-change, no-motion period!

[Table: NO or LOW-MOTION DATA]
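A minimal sketch of how such periods could be flagged automatically: the per-sample delta threshold is a made-up value, while the minimum duration follows the footnote below (periods of 5 to 10 minutes).

    # Flag periods where X, Y and Surface all change "very little" between samples.
    DELTA = 2                      # made-up threshold: max change per axis between samples
    MIN_RUN_MS = 5 * 60 * 1000     # minimum duration of a low-motion run (5 minutes)

    def low_motion_periods(records):
        """Return (start_ms, end_ms) intervals of sustained low motion."""
        periods, start = [], None
        for prev, cur in zip(records, records[1:]):
            still = all(abs(c - p) <= DELTA for p, c in zip(prev[1:], cur[1:]))
            if still and start is None:
                start = prev[0]
            elif not still and start is not None:
                if prev[0] - start >= MIN_RUN_MS:
                    periods.append((start, prev[0]))   # candidate rest / sleep period
                start = None
        if start is not None and records[-1][0] - start >= MIN_RUN_MS:
            periods.append((start, records[-1][0]))
        return periods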

 

 

Rapid Changes in Data
(jump?)

This graph represents the change in surface area over time. It shows a rapid increase and decrease in surface area in a period of 0.8 seconds.

The video cassette confirms that the mouse jumps up toward the camera, causing it to increase rapidly in perceived size and surface area.

[Image: Jump.gif]
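A minimal sketch of how such a spike could be detected in the surface column: the 25 % rise threshold is a made-up value, and the 20-sample window corresponds to the 0.8 seconds described above (at 25 samples per second).

    WINDOW = 20        # samples: ~0.8 s at 25 Hz
    RISE = 1.25        # made-up threshold: peak surface must exceed the baseline by 25 %

    def jump_candidates(records):
        """Times (ms) where the surface rises sharply and falls back within WINDOW samples."""
        times = [t for t, _, _, _ in records]
        surfaces = [s for _, _, _, s in records]
        jumps = []
        for i in range(len(surfaces) - WINDOW):
            base, window = surfaces[i], surfaces[i:i + WINDOW]
            peak = max(window)
            # rapid rise above the baseline, then back down by the end of the window
            if (base > 0 and peak >= RISE * base
                    and window.index(peak) < WINDOW - 1
                    and window[-1] < RISE * base):
                jumps.append(times[i])      # consecutive indices may flag the same jump
        return jumps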

 

* Very little delta on all three axes during periods lasting between 5 and 10 minutes.  Experience has shown that even sleeping animals "move" when seen through machine vision systems.

 


Group and Analyze Data

The observed area can be subdivided into geographical regions which are labeled (e.g. Aa, Ab, ... or 1, 2, 3, ...).  The creation and labeling of regions depends upon the analysis to be performed. The relations between regions can also be of interest: the time an animal occupies or traverses a region, the transitions between regions, etc. Certain behaviors (e.g. drinking, sleeping) can be associated with a region (water distributor, floor area).
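A minimal sketch of such region-based grouping, using an arbitrary square grid: the cell size and the letter/number labels are illustrative, and real regions would instead be drawn around the basin's landmarks.

    from collections import Counter

    CELL = 64      # made-up grid cell size, in pixels

    def region_of(x, y):
        """Label a position with a grid cell such as 'A0', 'B3', ..."""
        return f"{chr(ord('A') + x // CELL)}{y // CELL}"

    def region_statistics(records):
        """Time spent in each region (ms) and transition counts between regions."""
        time_in, transitions = Counter(), Counter()
        previous = None
        for (t, x, y, _), (t_next, _, _, _) in zip(records, records[1:]):
            region = region_of(x, y)
            time_in[region] += t_next - t
            if previous is not None and region != previous:
                transitions[(previous, region)] += 1
            previous = region
        return time_in, transitions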

[Images: grid_1A.gif, Positions.gif]

 

 


Conclusion

 

There are many who are working to apply machine vision and automatic behavior-analysis tools. To see links to other sites, click here: Ethology Links

We have a long way to go.

Most animals have developed the ability to observe other animals and extrapolate behavior.  For example, eat-or-be-eaten behavior can be initiated by a sequence of visual stimuli combined with learned responses.

But how can machines find behavior in motion data?  Fortunately for them, animals have more optimized computing power and well-developed responses.

Innovative techniques are required to develop software which is modular, has long-term potential, and can clearly communicate data-analysis results.

 

For further technical details click here: User's Manual of Suivi

For a downloadable copy of SUIVI click here: Download Suivi


STILL UNDER CONSTRUCTION

Last updated 10/26/00 10:05:09
