Posts Tagged ‘testing’

Thursday, September 10th, 2009
by Gary Myers

Since my post last week, I’ve received a number of questions and comments, so I thought I’d address them in a post (hopefully for everyone’s benefit).

As Steve Mitchell points out, the end-user definitely has an impact on the performance of the analytics, and training is needed. He drew an analogy last week between pilots and airplanes. Stretching this pilot/plane analogy a bit: the end-users are the pilots, and OV is a part of the airplane (with the whole plane being the end product delivered to market by our partners). OV builds a component that works in many different planes, and our responsibility is to make sure it performs in a wide variety of settings. Our OEM partners deliver the complete plane, which includes working with the pilots (users) to understand how to operate the plane (product) most effectively.

As part of the release process, we qualify our software in several ways:

  • Science testing to validate that the newest release is at least as good as, if not better than, prior releases. These automated tests use thousands of hours of video and corresponding rules to approximate real-world scenarios. The results are compared against baseline results, taking into account the metrics listed in my previous post.
  • Product testing to ensure that the whole product works end-to-end, including manual testing to approximate the end-user experience.
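The “at least as good as prior releases” check in the science testing above could be sketched roughly as follows. This is an illustrative sketch only: the metric names, the pass/fail rule, and the tolerance parameter are my own assumptions, not a description of OV’s actual test harness.

```python
# Hypothetical sketch of comparing a candidate release against a
# baseline on aggregate event counts. Metric names and the tolerance
# are illustrative assumptions, not OV's actual criteria.

def release_beats_baseline(candidate, baseline, tolerance=0):
    """A candidate passes if it detects at least as many true events
    and produces no more false or missed events than the baseline,
    within an optional tolerance."""
    return (
        candidate["detected"] >= baseline["detected"] - tolerance
        and candidate["false"] <= baseline["false"] + tolerance
        and candidate["missed"] <= baseline["missed"] + tolerance
    )

# Example: the candidate detects more events with fewer false and
# missed events, so it passes the regression check.
baseline = {"detected": 950, "false": 30, "missed": 50}
candidate = {"detected": 960, "false": 28, "missed": 40}
print(release_beats_baseline(candidate, baseline))  # True
```

In practice a harness like this would be run per video scenario rather than on one aggregate, so a regression in a single scene can’t hide behind improvements elsewhere.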

ObjectVideo focuses on testing our software for release to our partners. Different partners focus on different areas, so our partners are in the best position to provide performance criteria to the end market based upon their own test methodologies, results, and sales programs. In this way, they can effectively support their analytics-enabled products and know, as well, that those products are meeting the needs of their customers.

Thursday, September 3rd, 2009
by Gary Myers

Since OV is the leader in the industry, we get asked a lot about analytics performance. This can be hard to quantify because there are many contributing factors. In general, accurate event detection is affected by some combination of camera angle, camera placement, lighting conditions, other environmental factors, and system configuration. The goal when deploying and configuring an analytics-enabled system is to strike the proper balance between being too sensitive (causing false events) and not sensitive enough (causing missed events).

Over the years of building and testing our software, we’ve focused on three primary testing criteria when determining performance metrics: the numbers of detected events, false events, and missed events. The ideal case is to detect all expected events while keeping false and missed events low. If the system catches all the expected events but still produces a lot of false ones, we would consider performance low, as there will be too many nuisance events. Likewise with missed events: miss too many and overall user confidence goes down.

In future posts, I’ll cover some ways to improve effectiveness, either through camera setup or system adjustments, to enable the user to get the most from their investment in analytics.