ONVIF Analytics

I wanted to explore more features of my webcam (TV-IP311PI). Apart from probing and streaming, it can also do “video analytics”, which is the ONVIF term for motion detection.

From a system-architecture point of view, it makes more sense to have each camera analyse its own field of view and send image data only when something happens. If instead a central system analyses the feeds of many cameras, it has to receive image streams from all of them, and the data volume is much larger. The corresponding ONVIF specification defines

  • AnalyticsModules, i.e. detectors,
  • Rules, and
  • SceneDescriptions.

AnalyticsModules are mostly hard-coded in the device’s firmware. We can only influence a small number of parameters, and which parameters are available depends on the type of module.

Rules, on the other hand, are more flexible. A rule defines when to trigger which action; the parameters again depend on the type of rule. We can change rules and even create new ones. How many rules a device permits depends on its firmware.

SceneDescriptions are then the content of the messages sent when a rule triggers. Again, depending on the device firmware’s capabilities, SceneDescriptions might include the number and location of objects identified by the motion detectors.
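For a cell motion rule, the message carried by such a notification typically looks like the sketch below. This is my own reconstruction around the ONVIF topic tns1:RuleEngine/CellMotionDetector/Motion; the exact field names and token values vary by firmware, so treat it as illustrative only:

```xml
<tt:Message UtcTime="2014-09-13T10:40:12Z" PropertyOperation="Changed">
  <tt:Source>
    <tt:SimpleItem Name="VideoAnalyticsConfigurationToken" Value="VideoAnalyticsToken"/>
    <tt:SimpleItem Name="Rule" Value="MyMotionDetectorRule"/>
  </tt:Source>
  <tt:Data>
    <tt:SimpleItem Name="IsMotion" Value="true"/>
  </tt:Data>
</tt:Message>
```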

A sample analytics configuration from my camera is shown below.

  <trt:Configurations token="VideoAnalyticsToken">
    <tt:AnalyticsModule Name="MyCellMotionModule" Type="tt:CellMotionEngine">
      <tt:SimpleItem Name="Sensitivity" Value="0"/>
      <tt:ElementItem Name="Layout">
        <tt:CellLayout Columns="22" Rows="15">
          <tt:Translate x="-1.000000" y="-1.000000"/>
          <tt:Scale x="0.001042" y="0.001852"/>
        </tt:CellLayout>
      </tt:ElementItem>
    </tt:AnalyticsModule>
    <tt:Rule Name="MyMotionDetectorRule" Type="tt:CellMotionDetector">
      <tt:SimpleItem Name="MinCount" Value="5"/>
      <tt:SimpleItem Name="AlarmOnDelay" Value="100"/>
      <tt:SimpleItem Name="AlarmOffDelay" Value="100"/>
      <tt:SimpleItem Name="ActiveCells" Value="1wA="/>
    </tt:Rule>
  </trt:Configurations>

This tells us that the camera has one AnalyticsModule, a cell motion detector whose cells are arranged in 22 columns by 15 rows.
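As an aside, the Translate and Scale values appear to map pixel coordinates into ONVIF’s normalized coordinate range of -1…1: 0.001042 ≈ 2/1920 and 0.001852 ≈ 2/1080, which suggests a 1920×1080 stream. Assuming that interpretation (it is my reading, not something stated in the configuration), a small Python sketch tells us which motion cell a given pixel falls into:

```python
# Layout values from the configuration above
COLUMNS, ROWS = 22, 15
TRANSLATE = (-1.0, -1.0)
SCALE = (0.001042, 0.001852)

def cell_of_pixel(px, py):
    """Map a pixel coordinate to its motion cell (column, row),
    assuming Translate/Scale normalize pixels into [-1, 1]."""
    nx = TRANSLATE[0] + px * SCALE[0]   # normalized x
    ny = TRANSLATE[1] + py * SCALE[1]   # normalized y
    col = int((nx + 1) / 2 * COLUMNS)
    row = int((ny + 1) / 2 * ROWS)
    return col, row

print(cell_of_pixel(0, 0))        # -> (0, 0), top-left cell
print(cell_of_pixel(1900, 1000))  # -> (21, 13), near the bottom-right
```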


Now, to understand this, look at this webcam’s view. For the sake of illustration it has been divided into a coarser grid of 8 by 6 cells.


Each cell can detect motion independently. In a corresponding rule we can then specify the cells that are to be monitored. For example, we may want to exclude regions with leaves that move in the wind. In our grid we masked the bushes in the front and the tree line on the horizon.


Now our rule will only fire if there is movement in the unmasked region of the image. The masked regions are ignored.

How does that translate to the XML rule definition? Look at the value ActiveCells="1wA=" in the example rule definition above. This has nothing to do with watts or ampères; the value is an encoded bit mask. An active cell is represented by a 1 and an inactive (masked, greyed-out) cell by a 0. That gives a string of 1s and 0s. In our example of 8×6 cells the string is 48 bits long. This bit string is then run-length encoded and finally Base64-encoded.
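To make the encoding concrete, here is a Python sketch of both directions. The function names are mine; the run-length step is a Packbits-style codec (as used in TIFF), which, as far as I can tell, is what the ONVIF analytics spec prescribes for ActiveCells:

```python
import base64

def unpackbits(data):
    """Minimal Packbits (TIFF 6.0) decoder."""
    out = bytearray()
    i = 0
    while i < len(data):
        n = data[i]
        if n < 128:                      # literal run: copy next n+1 bytes
            out += data[i + 1:i + 2 + n]
            i += 2 + n
        elif n > 128:                    # repeat run: next byte, 257-n times
            out += bytes([data[i + 1]]) * (257 - n)
            i += 2
        else:                            # 128 is a no-op
            i += 1
    return bytes(out)

def decode_active_cells(value, columns, rows):
    """Expand an ActiveCells value into a list of '0'/'1' row strings."""
    bits = ''.join(f'{b:08b}' for b in unpackbits(base64.b64decode(value)))
    return [bits[r * columns:(r + 1) * columns] for r in range(rows)]

def encode_active_cells(grid):
    """Pack row strings of '0'/'1' back into an ActiveCells value.
    Simplified encoder: emits repeat runs and single literals only."""
    bits = ''.join(grid)
    bits += '0' * (-len(bits) % 8)       # pad to a whole number of bytes
    raw = bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))
    out = bytearray()
    i = 0
    while i < len(raw):
        run = 1
        while i + run < len(raw) and raw[i + run] == raw[i] and run < 128:
            run += 1
        out += bytes([257 - run, raw[i]]) if run > 1 else bytes([0, raw[i]])
        i += run
    return base64.b64encode(bytes(out)).decode()

# The camera's own value "1wA=" is a single repeat run: 42 bytes of 0x00,
# enough bits to cover the whole 22x15 grid with no cell set.
print(decode_active_cells("1wA=", 22, 15)[0])   # -> '0000000000000000000000'

# All 48 cells of an 8x6 grid active:
print(encode_active_cells(['1' * 8] * 6))       # -> '+/8='
```

So the camera’s stored value happens to describe a mask with no cell set; uploading a new mask is just the encode step in reverse order of the decode.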

My next steps will be to receive the message triggered by this rule and finally to create an alarm (event) in ZoneMinder based on this message.

I will keep you posted.

Trackbacks / Pingbacks

  1. ONVIF Notifications | Tech Tids & Bits - September 13, 2014
