Humans make mistakes; computers, by definition, don't. So what is gained by manually grinding out hours of testing? Besides the skill to operate variously branded remotes professionally, not much. My name is Richard "Boo" Strachan, and throughout my internship I have been trying to help automate many of the time-consuming processes dealt with daily in the Digital Living Consortium (DLC).
Although code has to be as close to error-free as possible (so as not to produce faulty results), it does not necessarily have to be 'fast'. Even if manual testing works out to be many times faster, a program capable of performing tests independently means more tests can be run with fewer people, and they can run overnight. I'm currently working under Mike Johnson, also a DLC team member, and when I asked him about the benefits of automated testing he replied, "Robots don't have labor laws."
Sean 2.0 is an automation program written in Java with the potential to revolutionize workflow in the DLC. At the time of Sean 2.0's inception six months prior, Sean Curran, a DLC testing engineer and human, had broken his shoulder, rendering him unable to work. Mike designed the program as Sean's replacement and has led production since.
So, we ask, is automation worth the effort? While some might think the primary goals of Sean 2.0 are to increase productivity and reduce costs, it actually has another: to increase the quality and precision of results. Having the program recognize a properly rendered image within the Digital Living Network Alliance (DLNA) guidelines, without worry of misclicking or misjudging, is huge in itself. This is not to mention the program's ability to do this for hundreds of file types...without complaining.
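To give a feel for the kind of check described above, here is a minimal sketch in Java of comparing a device's rendered output against a known-good reference image, pixel by pixel. The class and method names are illustrative assumptions, not taken from the actual Sean 2.0 code; a real DLNA render check would also have to capture the device's output and allow for tolerances.

```java
import java.awt.image.BufferedImage;

// Hypothetical sketch of an automated render check: compare a captured
// image against a known-good reference so no human has to eyeball it.
public class RenderCheck {

    // Returns true when both images have the same dimensions and
    // identical ARGB pixel values, i.e. the render matches the reference.
    public static boolean imagesMatch(BufferedImage reference, BufferedImage rendered) {
        if (reference.getWidth() != rendered.getWidth()
                || reference.getHeight() != rendered.getHeight()) {
            return false;
        }
        for (int y = 0; y < reference.getHeight(); y++) {
            for (int x = 0; x < reference.getWidth(); x++) {
                if (reference.getRGB(x, y) != rendered.getRGB(x, y)) {
                    return false;
                }
            }
        }
        return true;
    }

    public static void main(String[] args) {
        // Two tiny synthetic images stand in for a reference and a capture.
        BufferedImage ref = new BufferedImage(2, 2, BufferedImage.TYPE_INT_ARGB);
        BufferedImage good = new BufferedImage(2, 2, BufferedImage.TYPE_INT_ARGB);
        ref.setRGB(0, 0, 0xFF102030);
        good.setRGB(0, 0, 0xFF102030);
        System.out.println(imagesMatch(ref, good));   // true

        BufferedImage bad = new BufferedImage(2, 2, BufferedImage.TYPE_INT_ARGB);
        bad.setRGB(0, 0, 0xFF000001);                 // one pixel differs
        System.out.println(imagesMatch(ref, bad));    // false
    }
}
```

Running the same comparison in a loop over hundreds of media files is exactly the sort of tedium a program tolerates better than a person.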
Ideally, the program will reduce consortium overhead significantly. Rather than spending time learning how to test (also a time-consuming process), new employees will instead learn to start the program and, while testing is ongoing, pick up additional skills beneficial to them outside the lab.
The initial amount of time needed to code is large, but well worth it, as Sean 2.0 will open many possibilities to the consortium. After the program is completed, vendors may be able to expedite testing by shipping more devices to be tested in parallel. Since we have already virtualized most test beds and multiple VMs can run at the same time, many devices can be tested simultaneously on just a few physical machines, and now remotely.
Programming can't do everything for us. If the program tells us there is an error, we will need to manually check and troubleshoot what the vendors need to improve. If we are sent more devices, more time will be spent writing reports. If we get a new device with a different folder structure, more programming will need to be done. Will we lose our testing jobs to robots? The answer is no, but we might become programmers instead.
Richard "Boo" Strachan, 2012 High School Summer Intern