Archive for the ‘tech talk’ Category

 
Monday, April 12th, 2010

by Gary Myers

Earlier this year, ObjectVideo released an updated OV Ready specification and the corresponding OV Ready reference application (which now includes ‘event push’ functionality) to our partners to incorporate into their devices. In addition, we provided version 1.0 of our new web UI, which we call the ObjectVideo management console.

For those who are unfamiliar with OV Ready, it is an ObjectVideo program comprising a protocol specification, reference code and compliance tools that allow various devices and applications to interoperate for video analytics operations and alerting. More detailed information is available on our website.

Even though that one-sentence summary is quite a mouthful, think of HTTP as an analogy. At a basic level, it is a protocol that specifies how web browsers and web servers talk to each other. It doesn’t matter whether you are using Firefox, Chrome or IE; you can still talk to IIS or Apache. OV Ready is similar, except that the focus is on ObjectVideo OnBoard configuration, rule management and event output. We want to make sure all ObjectVideo-enabled devices (cameras, encoders, servers, etc.) can be used by a wide variety of management applications (VMS, PSIMs, etc.) regardless of type or brand.
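
To make the analogy a little more concrete, here is a purely illustrative sketch of the interaction pattern: a management application sends a rule to an analytics-enabled device and registers a callback so the device can push events back. The endpoint paths, field names and addresses below are invented for this example; they are not taken from the OV Ready specification.

    # Illustrative only: the interoperability idea, shown with made-up
    # endpoints and payloads. This is NOT the actual OV Ready protocol.
    import requests

    DEVICE = "http://camera-01.example.local"  # hypothetical analytics-enabled device

    # Push a rule to the device (hypothetical endpoint and schema).
    rule = {
        "name": "Loitering at loading dock",
        "type": "loiter",
        "duration_seconds": 60,
        "area_of_interest": [[0.1, 0.2], [0.8, 0.2], [0.8, 0.9], [0.1, 0.9]],
    }
    resp = requests.post(f"{DEVICE}/rules", json=rule, timeout=5)
    resp.raise_for_status()

    # Register a callback so the device can push events to the management
    # application (hypothetical 'event push' registration).
    requests.post(
        f"{DEVICE}/event-subscriptions",
        json={"callback_url": "http://vms.example.local/events"},
        timeout=5,
    )

The point is the same as with HTTP: as long as both sides speak the common protocol, the brand of the device and the brand of the management application stop mattering.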


 
Thursday, September 10th, 2009
by Gary Myers

Since my post last week, I’ve received a number of questions and comments, so I thought I’d address them in a follow-up post (hopefully for everyone’s benefit).

As Steve Mitchell points out, the end user definitely has an impact on the performance of the analytics, and training is needed. He drew an analogy last week between pilots and airplanes. Stretching this pilot/plane analogy a bit, the end users are the pilots and OV is a part of the airplane (with the whole plane being the end product delivered to market by our partners). OV builds a component that works in many different planes, and our responsibility is to make sure it performs in a wide variety of settings. Our OEM partners deliver the complete plane, which includes working with the pilots (users) to understand how to operate the plane (product) most effectively.

As part of the release process, we qualify our software in several ways:

  • Science testing to validate that the newest release is at least as good as, if not better than, prior releases. These automated tests use thousands of hours of video and corresponding rules to approximate real-world scenarios. The results are compared against baseline results using the metrics listed in my earlier post (a simplified version of such a comparison is sketched just after this list).
  • Product testing to ensure that the whole product works end-to-end, including manual testing to approximate the end-user experience.
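
For readers who like to see the shape of that first kind of comparison, here is a minimal sketch in Python. It assumes per-clip counts of detected, missed and false events plus the hours of video in each clip; the counts, field names and tolerances are invented for illustration, and this is not our actual test harness.

    # Simplified regression-style comparison of a candidate release against a
    # baseline, using the three criteria discussed in this series: detected,
    # false and missed events. All numbers below are made up.

    def summarize(results):
        """Roll per-clip counts up into a detection rate and false events per hour."""
        detected = sum(r["detected"] for r in results)
        missed = sum(r["missed"] for r in results)
        false_events = sum(r["false"] for r in results)
        hours = sum(r["hours"] for r in results)
        expected = detected + missed
        return {
            "detection_rate": detected / expected if expected else 1.0,
            "false_per_hour": false_events / hours,
        }

    def passes_regression(candidate, baseline, rate_tol=0.01, false_tol=0.05):
        """Candidate must detect at least as well as the baseline and must not
        generate noticeably more false events, within small tolerances."""
        return (
            candidate["detection_rate"] >= baseline["detection_rate"] - rate_tol
            and candidate["false_per_hour"] <= baseline["false_per_hour"] + false_tol
        )

    baseline = summarize([{"detected": 96, "missed": 4, "false": 7, "hours": 50.0}])
    candidate = summarize([{"detected": 97, "missed": 3, "false": 6, "hours": 50.0}])
    print(passes_regression(candidate, baseline))  # True in this made-up example
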

ObjectVideo focuses on testing our software for release to our partners. Different partners focus on different areas, so they are in the best position to provide the performance criteria to the end market based upon their own test methodologies, results and sales programs. In this way, they can effectively support their analytics-enabled products and know, as well, that those products are meeting the needs of their customers.

 
Thursday, September 3rd, 2009
by Gary Myers

Since OV is the leader in the industry, we get asked a lot about analytics performance. This can be hard to quantify because there are many contributing factors. In general, accurate event detection is affected by some combination of camera angle, camera placement, lighting conditions, other environmental factors and system configuration. The goal when deploying and configuring an analytics-enabled system is to strike the proper balance between being too sensitive (causing false events) and not sensitive enough (causing missed events).

Over the years of building and testing our software, we’ve focused on three primary testing criteria when determining performance: the number of detected events, false events and missed events. The ideal case is to detect all expected events while generating very few false ones. If you catch all the expected events but still generate a lot of false ones, we would consider performance poor, as there will be too many nuisance events. Likewise with missed events: miss too many and overall user confidence goes down.
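
To make those three criteria concrete, here is a small illustrative calculation showing how detected, false and missed events roll up into summary numbers. The counts are made up for the example.

    # Illustrative only: made-up event counts for a single test scenario.
    detected = 92       # expected events that were correctly detected
    missed = 8          # expected events that were not detected
    false_events = 15   # reported events that did not actually occur

    expected = detected + missed
    detection_rate = detected / expected                     # higher is better
    nuisance_share = false_events / (detected + false_events)  # share of alerts that are noise

    print(f"detection rate: {detection_rate:.0%}")   # 92%
    print(f"nuisance share: {nuisance_share:.0%}")   # 14%

Tuning sensitivity shifts events between these buckets: a more sensitive configuration typically turns some missed events into detections but also produces more false events, which is why the balance described above matters.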

In future posts, I’ll cover some ways to improve effectiveness, either through camera setup or system adjustments, to enable the user to get the most from their investment in analytics.