ONVIF Analytics
I wanted to explore more features of my webcam (TV-IP311PI). Apart from probing and streaming, it can also do “video analytics” — the ONVIF name for motion detection.
From a system architecture point of view, it makes more sense to have each camera analyse its own field of view and only then send image data. If instead a central system analyses the data from many cameras, the data volume is much larger: the central system has to receive the full image streams from all cameras in order to analyse them. The corresponding ONVIF specification defines
- AnalyticsModules, i.e. detectors,
- Rules, and
- SceneDescriptions.
AnalyticsModules are mostly hard-coded in the device’s firmware. We can only influence a small number of parameters, and which parameters are available depends on the type of module.
Rules on the other hand are more flexible. A rule defines when to trigger which action. The parameters again depend on the type of rule. We can change rules and even create new ones. How many rules a device permits depends on its firmware.
SceneDescriptions, finally, are the content of the messages sent when a rule triggers. Again depending on the device firmware’s capabilities, SceneDescriptions may include the number and location of the objects identified by the motion detectors.
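Retrieving this information goes through the Media service’s GetVideoAnalyticsConfigurations call. Below is a minimal sketch of the SOAP request; the host, port and service path are placeholder assumptions, and the WS-Security authentication header most cameras require is omitted for brevity.

```python
import urllib.request

# Bare SOAP envelope for GetVideoAnalyticsConfigurations (Media service).
# The trt namespace matches the one used in the response shown below.
ENVELOPE = """<?xml version="1.0" encoding="UTF-8"?>
<s:Envelope xmlns:s="http://www.w3.org/2003/05/soap-envelope"
            xmlns:trt="http://www.onvif.org/ver10/media/wsdl">
  <s:Body>
    <trt:GetVideoAnalyticsConfigurations/>
  </s:Body>
</s:Envelope>"""

def fetch_analytics_configs(host="192.168.1.64", port=80):
    # "/onvif/media_service" is a guess at the endpoint path; check your
    # camera's GetServices response for the real Media service address.
    req = urllib.request.Request(
        f"http://{host}:{port}/onvif/media_service",
        data=ENVELOPE.encode(),
        headers={"Content-Type": "application/soap+xml"})
    with urllib.request.urlopen(req) as resp:
        return resp.read().decode()
```

Libraries such as python-onvif-zeep wrap this plumbing, but seeing the raw envelope makes it clear how little is actually on the wire.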
A sample analytics configuration from my camera is shown below.
<trt:GetVideoAnalyticsConfigurationsResponse>
  <trt:Configurations token="VideoAnalyticsToken">
    <tt:Name>VideoAnalyticsName</tt:Name>
    <tt:UseCount>2</tt:UseCount>
    <tt:AnalyticsEngineConfiguration>
      <tt:AnalyticsModule Name="MyCellMotionModule" Type="tt:CellMotionEngine">
        <tt:Parameters>
          <tt:SimpleItem Name="Sensitivity" Value="0"/>
          <tt:ElementItem Name="Layout">
            <tt:CellLayout Columns="22" Rows="15">
              <tt:Transformation>
                <tt:Translate x="-1.000000" y="-1.000000"/>
                <tt:Scale x="0.001042" y="0.001852"/>
              </tt:Transformation>
            </tt:CellLayout>
          </tt:ElementItem>
        </tt:Parameters>
      </tt:AnalyticsModule>
    </tt:AnalyticsEngineConfiguration>
    <tt:RuleEngineConfiguration>
      <tt:Rule Name="MyMotionDetectorRule" Type="tt:CellMotionDetector">
        <tt:Parameters>
          <tt:SimpleItem Name="MinCount" Value="5"/>
          <tt:SimpleItem Name="AlarmOnDelay" Value="100"/>
          <tt:SimpleItem Name="AlarmOffDelay" Value="100"/>
          <tt:SimpleItem Name="ActiveCells" Value="1wA="/>
        </tt:Parameters>
      </tt:Rule>
    </tt:RuleEngineConfiguration>
  </trt:Configurations>
</trt:GetVideoAnalyticsConfigurationsResponse>
This tells us that the camera has one AnalyticsModule, a CellMotionEngine with a matching CellMotionDetector rule, whose cells are arranged in a grid of 22 columns by 15 rows.
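The Transformation inside the CellLayout maps pixel coordinates into the normalized ONVIF coordinate system. A small sketch of that mapping; note that the scale factors 0.001042 and 0.001852 are approximately 2/1920 and 2/1080, which suggests the grid refers to a 1920×1080 frame (my reading, not stated in the response).

```python
# Values taken from the Transformation element above.
TRANSLATE = (-1.0, -1.0)
SCALE = (0.001042, 0.001852)  # ~2/1920 and ~2/1080

def to_normalized(px, py):
    """Map a pixel coordinate to ONVIF normalized coordinates [-1, 1]."""
    return (TRANSLATE[0] + SCALE[0] * px,
            TRANSLATE[1] + SCALE[1] * py)

print(to_normalized(0, 0))      # (-1.0, -1.0): one corner of the frame
print(to_normalized(960, 540))  # close to (0, 0): the frame centre
```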
CellMotionDetector
Now, to understand this, look at this webcam’s view. It has been divided into a grid of 8 by 6 cells.
Each cell can detect motion independently. In a corresponding rule we can then specify the cells that are to be monitored. For example, we may want to exclude regions with leaves that move in the wind. In our grid we masked the bushes in the front and the tree line on the horizon.
Now our rule will only fire if there is movement in the unmasked region of the image. The masked regions are ignored.
How does that translate to the XML rule definition? See the value ActiveCells="1wA=" in the example rule definition above. This has nothing to do with watts or ampères; the value is an encoded bit mask. An active cell is represented by a 1 and an inactive (masked, greyed-out) cell by a 0. That gives a string of 1s and 0s. In our example of 8×6 cells the string is 48 bits long. It is then run-length encoded and finally base64 encoded.
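We can at least peel off the outer base64 layer of the camera’s actual value. The sketch below only undoes the base64 step; the inner run-length scheme is defined in the ONVIF analytics specification and is not decoded here.

```python
import base64

# ActiveCells value from the rule definition above.
packed = base64.b64decode("1wA=")
bits = "".join(f"{byte:08b}" for byte in packed)

# Two bytes of run-length-encoded mask data; the run-length layer
# itself still has to be expanded to recover the 22x15 cell grid.
print(packed.hex(), bits)   # d700 1101011100000000
```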
My next steps will be to receive the message triggered by this rule and finally to create an alarm (event) in ZoneMinder based on this message.
I will keep you posted.