by Gary Myers
Since my post last week, I’ve received a number of questions and comments, so I thought I’d address them in a follow-up post (hopefully for everyone’s benefit).
As Steve Mitchell points out, the end user definitely has an impact on the performance of the analytics, and training is needed. Last week he drew an analogy to pilots and airplanes. Stretching that analogy a bit: the end users are the pilots, and OV is a part of the airplane (the whole plane being the end product delivered to market by our partners). OV builds a component that works in many different planes, and our responsibility is to make sure it performs well in a wide variety of settings. Our OEM partners deliver the complete plane, which includes working with the pilots (users) so they understand how to operate the plane (product) most effectively.
As part of the release process, we qualify our software in several ways:
Science testing to validate that the newest release performs at least as well as, if not better than, prior releases. These automated tests use thousands of hours of video and corresponding rules to approximate real-world scenarios. The results are compared against the baseline results, taking into account the metrics listed in my previous post.
Product testing to ensure that the whole product works end-to-end, including manual testing to approximate the end-user experience.
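To make the science-testing step concrete, here is a minimal sketch of a baseline-comparison gate. The metric names, values, and tolerance are hypothetical illustrations, not ObjectVideo's actual criteria: the idea is simply that a candidate release passes only if no tracked metric regresses relative to the baseline.

```python
# Hypothetical baseline-comparison gate for release qualification.
# Metric names and threshold values are illustrative only.

def passes_baseline(candidate: dict, baseline: dict, tolerance: float = 0.0) -> bool:
    """Return True if the candidate release is at least as good as the baseline.

    "Higher is better" metrics (e.g. detection rate) must not drop;
    "lower is better" metrics (e.g. false alarm rate) must not rise.
    """
    higher_is_better = {"detection_rate"}
    lower_is_better = {"false_alarm_rate"}

    for name in higher_is_better:
        if candidate[name] < baseline[name] - tolerance:
            return False  # a quality metric regressed
    for name in lower_is_better:
        if candidate[name] > baseline[name] + tolerance:
            return False  # an error metric got worse
    return True

baseline = {"detection_rate": 0.95, "false_alarm_rate": 0.02}
candidate = {"detection_rate": 0.96, "false_alarm_rate": 0.018}
print(passes_baseline(candidate, baseline))  # improves both metrics, so it passes
```

In practice a gate like this would aggregate results over many video hours and rule configurations before comparing, but the pass/fail decision reduces to the same per-metric check.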
ObjectVideo focuses on testing our software for release to our partners. Different partners focus on different areas, so our partners are in the best position to define performance criteria for the end market based on their own test methodologies, results, and sales programs. In this way, they can effectively support their analytics-enabled products and know, as well, that those products are meeting the needs of their customers.