By Jason Frankovitz
Every software expert at Quandary Peak regularly reads and analyzes large volumes of source code, and while understanding the specific programming language can help, it’s not the only skill we bring to a project. Knowing the unique qualities and nuances that apply to programming code can be the difference between winning and losing a trial. In this eight-part series of posts I’ll describe a few of the unintuitive aspects of source code that experts have to keep in mind when working on your case.
Source Code Never Runs
Like attorneys, software programmers are in the business of technicalities, where small distinctions can sometimes make a big difference. This is particularly important in software patent cases, which often hinge on very minor, esoteric aspects of code (very minor to a non-computer-scientist, at least). One of those potential trivialities is the distinction between the code written by programmers and the instructions the computer actually executes. As we shall see, virtually none of the code written by programmers ever runs on any computer. How is this possible? To explain this apparent contradiction, we need a bit of history.
Computers prefer numbers
Alan Turing was an early computer scientist whose codebreaking work was instrumental in the Allies’ World War II victory. In 1936 he described a theoretical device, now called a Turing machine, that could read and write symbols on an unbounded paper tape to carry out arbitrary computations.
Later, in the 1970s, the first build-it-yourself home computer, the Altair 8800, was great at mathematical tasks, but not so great at inputting them: it had no keyboard. To enter a number into the Altair, you had to look up which switch positions on the front panel represented the number you wanted to enter, flip the switches accordingly, then flip a final switch to commit the number into memory.
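The gap between the text a programmer writes and the numbers a machine consumes is easy to see even in a modern language. As a minimal sketch (the function `add` is mine, chosen just for illustration; the mechanics use Python’s standard `dis` module and compiled code objects):

```python
import dis

def add(a, b):
    # This is the "source code" a programmer writes and reads.
    return a + b

# What the Python virtual machine actually runs is a sequence of
# numeric bytecode instructions -- bytes, not the text above:
raw = add.__code__.co_code
print(list(raw))        # a list of small integers (opcodes and arguments)

# dis translates those numbers back into human-readable mnemonics:
dis.dis(add)
```

The exact numbers vary between Python versions, but the point holds in every version: the readable source text is translated away before anything executes, much as the Altair’s users translated their intentions into switch positions.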
By Quandary Peak Research
Quandary Peak Research expert Dr. Shahin Nazarian was deposed in several IPR cases on March 15, March 16, April 4 and April 6, 2016. He was the lead testifying expert witness on six contested patents. Nazarian was asked to opine on the invalidity of US patents 6,519,659, 6,487,656, 6,633,976, 6,373,498, 6,401,202, and 6,892,304. Nazarian drew on his industrial and research skills in OS and firmware design and development from both hardware and software domains. He was engaged by Stadheim & Grear, who represented the patent owner, Kinglite Holdings Inc., against the petitioners, American Megatrends, Inc., Micro-Star International Co., Ltd, MSI Computer Corp., Giga-Byte Technology Co., Ltd., and G.B.T., Inc.
US 6,519,659: Method and System for Transferring an Application Program from System Firmware to a Storage Device
Basic Input and Output System (BIOS) firmware is the software shipped on a BIOS ROM or FLASH chipset with a PC system. Firmware is the ROM-based software that controls a computer between the time it is turned on and the time the primary OS (operating system) takes control of the machine. Firmware’s responsibilities include testing and initializing the hardware, determining the hardware configuration, loading or booting the OS, and providing interactive debugging facilities in case of faulty hardware or software.
This patent discloses a system and a method to deliver applications from system firmware to a storage device without the need for or availability of an OS and/or a directory service. Prior to booting an OS on the processor-based system, the stored instruction sequences cause the processor to write the contents of a storage element to the storage device.
This patent also teaches a system and a method for remote communication with a service computer that provides the user’s computer access to a database. The method first writes the contents of at least one storage element to […]
By Quandary Peak Research
Two Quandary Peak experts and their teams presented their latest research at the twenty-sixth International Symposium on Software Reliability Engineering (ISSRE), one of the most respected IEEE events in the field.
Dr. Mahdi Eslamimehr and his teammate from MIT won the ISSRE 2015 Best Paper award for “AtomChase: Directed Search Towards Atomicity Violations”, the result of three years’ work developing a state-of-the-art debugging tool for supercomputers. The Best Paper award recognizes top researchers whose work influences academia and industry.
Atomicity violation is one of the main sources of concurrency bugs. Empirical studies show that the majority of atomicity violations are instances of the three-access pattern, in which two accesses to a shared variable by one thread are interleaved by an access to the same variable by another thread. Dr. Eslamimehr presented a novel approach to atomicity violation detection that comprises two parts: (1) execution schedule synthesis, and (2) directed concurrent execution based on constraint solving and concolic execution. In comparison to five previous tools on 22 benchmark codebases (including Apache Tomcat, with 4.5 million lines of Java code), AtomChase increased the number of three-access violations found by 24% and found errors in programs that were wrongly assumed to be bug-free. To avoid reporting false alarms, Dr. Eslamimehr and his colleague verified sufficient conditions for non-atomicity of three-access pattern traces. These conditions recognized 89% of the actual atomicity violations found by AtomChase. Because checking these conditions is two orders of magnitude faster than a brute-force check, AtomChase has made it possible to improve the quality of industrial software such as the popular Eclipse integrated development environment.
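The three-access pattern is easiest to see in code. The following Python sketch (the variable and function names are mine, not from AtomChase) uses events to force the harmful interleaving deterministically, so the lost update happens every time:

```python
import threading

balance = 0                        # shared variable
t1_read = threading.Event()
t2_wrote = threading.Event()

def thread_one():
    global balance
    seen = balance                 # access 1: thread 1 reads the shared variable
    t1_read.set()                  # deliberately yield to the other thread
    t2_wrote.wait()                # ...while thread 2's access happens in between
    balance = seen + 1             # access 3: thread 1 writes, based on a stale read

def thread_two():
    global balance
    t1_read.wait()
    balance = balance + 1          # access 2: interleaved write by another thread
    t2_wrote.set()

t1 = threading.Thread(target=thread_one)
t2 = threading.Thread(target=thread_two)
t1.start(); t2.start()
t1.join(); t2.join()

print(balance)  # 1, not 2: thread_two's increment is lost
```

In real programs the interleaving is nondeterministic, which is exactly why directed search techniques like the one described above are needed to reproduce it reliably.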
Another interesting study on end-to-end Android application analysis was presented at the same symposium by Dr. Sam Malek and his research group at the University of California, Irvine.
Pervasiveness of smartphones and the vast number of corresponding apps […]
By Mahdi Eslamimehr
When human health and safety depend on software, correct system operation is the most important concern. Unfortunately, a lack of proper testing and verification in such systems allows program defects to remain undetected and later turn into hazardous failures. Here are a few examples from a long history of failing programs that have cost lives:
- Therac-25 (the successor of the Therac-6 and Therac-20) was a radiation therapy machine manufactured by Atomic Energy of Canada Limited (AECL). The machine had two modes of radiation therapy: high power and low power. In the Therac-25’s predecessors, switching between modes was controlled by a hardware interlock. The hardware interlock was replaced by a software interlock in the Therac-25, which failed because of an undetected race condition. This fatal bug caused the deaths of three patients between 1985 and 1987. Several lawsuits were filed as a result of these accidents in the late 1980s. Nancy Leveson, a professor at the University of Washington, and Clark Turner, a graduate student at the University of California, Irvine, conducted a long investigation into the software defect and published a report in IEEE Computer in 1993.
- Toyota was forced to recall more than 10 million vehicles between 2009 and 2011 due to an unintended acceleration problem. Reports from the National Highway Traffic Safety Administration documented 6,200 complaints involving unintended acceleration of Toyota vehicles. Further investigation revealed that 89 deaths and 57 injuries were potentially linked to the major Toyota recalls. Even NASA investigated the issue and published a public report with its findings. Several problems were detected in Toyota’s Electronic Throttle Control System (ETCS) and chipsets. NASA’s investigation traced these defects to important software development mistakes. For example, NASA confirmed that no timing analysis had been performed. In particular, no worst-case execution time (WCET) analysis was conducted because of the complex nature of the […]
By Nupul Kukreja
Software maintenance costs result from modifying your application either to support new use cases or to update existing ones, along with continual bug fixing after deployment. As much as 70-80% of the Total Cost of Ownership (TCO) of software can be attributed to maintenance alone!
Software maintenance activities can be classified as:
- Corrective maintenance – costs due to modifying software to correct issues discovered after initial deployment (generally 20% of software maintenance costs)
- Adaptive maintenance – costs due to modifying a software solution to allow it to remain effective in a changing business environment (25% of software maintenance costs)
- Perfective maintenance – costs due to improving or enhancing a software solution to improve overall performance (generally 5% of software maintenance costs)
- Enhancements – costs due to continuing innovations (generally 50% or more of software maintenance costs)
Since maintenance costs eclipse those of other software engineering activities by a large margin, it is imperative to answer the following question:
How maintainable is my application/source-code, really?
The answer is non-trivial and requires a deeper understanding of what it means for an application to be maintainable. Measuring software maintainability is hard because there is no single metric that can state whether one application is more maintainable than another, and no single tool that can analyze your code repository and give you an accurate answer either. There is no substitute for a human reviewer, but even humans can’t analyze an entire code repository to give a definitive answer. Some amount of automation is necessary.
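To make the idea of automated assistance concrete, here is a deliberately simplified sketch of one kind of signal such tools compute: a cyclomatic-style count of branching constructs. The function name and thresholds are mine, for illustration only; real maintainability tools combine many such signals.

```python
import ast

def branchiness(source: str) -> int:
    """Count branching constructs in Python source as a rough complexity proxy.

    More branches generally means more paths a maintainer must reason about.
    """
    tree = ast.parse(source)
    branch_nodes = (ast.If, ast.For, ast.While, ast.Try, ast.BoolOp)
    return sum(isinstance(node, branch_nodes) for node in ast.walk(tree))

simple = "def f(x):\n    return x + 1\n"
tangled = (
    "def g(x):\n"
    "    if x > 0:\n"
    "        for i in range(x):\n"
    "            if i % 2 == 0 and i > 2:\n"
    "                x += i\n"
    "    return x\n"
)

print(branchiness(simple), branchiness(tangled))  # 0 vs. 4
```

A score like this says nothing by itself about naming, documentation, or architecture, which is precisely why automated metrics supplement, rather than replace, human review.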
So, how can you measure the maintainability of your application? To answer this question, let’s dissect the definition of maintainability further. Imagine you have access to the source code of two applications, A and B. Let’s say you also have the superhuman ability to compare both of them in a small […]
By Quandary Peak Research
Brad Ulrich, a computer scientist at Quandary Peak Research, was recently engaged to provide software analysis and consulting for Step-By-Step Academy, an Ohio-based company, in the matter of Step-By-Step Academy, Inc. v. Special Learning, Inc. in the U.S. District Court for the Southern District of Ohio. Special Learning is a Chicago-based company.
The case involves a dispute over the development of electronic health record (EHR) software and software designed to meet Commission on Accreditation of Rehabilitation Facilities (CARF) compliance standards. Mr. Ulrich is evaluating the software developed under the contracts, as well as the technical performance and requirements under those contracts. Claims in the case include breach of written and oral agreements/contracts, detrimental reliance, promissory estoppel, quantum meruit, and breach of the covenant of good faith and fair dealing.
Mr. Ulrich’s work with the case is ongoing.
By Sam Malek
Professor Sam Malek, one of Quandary Peak’s computer and software experts, was deposed in a patent case on December 13, 2013. Malek was the lead testifying expert witness on several of the asserted patents. In particular, he was asked to opine on the infringement of U.S. Patent No. 8,350,694 and U.S. Patent No. 8,439,202, as well as the invalidity of U.S. Patent Nos. 7,262,690, 7,911,341, 8,073,931, 8,335,842, 8,473,619, and 8,478,844.
Malek drew on his knowledge, skills, and training in distributed software design and development, including software intended for execution on wireless sensor networks and mobile platforms.
The case, iControl Networks Inc. v. Alarm.com Inc., was filed with the U.S. District Court for the Eastern District of Virginia. Malek was engaged by Cravath, Swaine & Moore LLP, a renowned international law firm consistently ranked among the world’s most prestigious, with offices in New York City and London.
By Quandary Peak Research
A FoxNews.com story on computer glitches in the HealthCare.gov website features several quotes by Quandary Peak software expert George Edwards. The article adds to a recent flurry of reports chronicling bugs in the new federal insurance marketplace. Dr. Edwards explained the inherent challenges faced by engineers in building the site, and noted that it’s impossible to tell at this stage how serious the defects are.
“I wouldn’t rule out the possibility” of ongoing problems with the website, Edwards is quoted as saying. If flaws exist in the site’s core architecture, they could take some time to fix.
Edwards also pointed out that the site’s engineers were, in some sense, given an impossible task. For example, the October 1 unveiling of the site was driven by public-policy rather than technical considerations.
The full article is available here on the FoxNews.com website.
By Quandary Peak Research
Cellular data usage for tablets is up a marked 48% year on year in the first quarter of 2013, according to a data consumption report released by consumer market research firm NPD Group. This isn’t a huge surprise, considering people are increasingly shifting video consumption and web browsing to tablets, and given consumers use cellular when Wi-Fi, mobile hotspots, or tethering aren’t options.
A 48% jump in data usage may seem sizable, but it still only brings cellular usage for tablets to 12% of the total market in the US—meaning only three out of every twenty-five tablets are using cellular. Taken together with the Consumer Electronics Association’s survey showing that 41 percent of the online US consumers polled already own a tablet and 72 percent plan to purchase one soon, this indicates that significant market share for cellular data usage (and tablets) is up for grabs.
What Does That Mean for Google’s Jelly Bean?
This past July, after much speculation and fanfare, Google finally released a new update to its Android operating system. Dubbed Android 4.3 Jelly Bean, it’s not available for everyone. Google has reserved the July release exclusively for its own line of products like the new Nexus 7. Reports, however, suggest that it will soon be available for widespread use.
Early chatter from the user community seems to indicate that Android 4.3 is making devices run faster, and in particular, Kevin C. Tofel reports that users of 2012 Nexus tablets are experiencing much better performance. On gigaom.com, Tofel highlights a specific software command in Android 4.3 called TRIM that works like “the old Disk Defragmenter on Microsoft Windows PCs which cleaned up the file system and put contiguous file bits in order on the hard drive to speed up I/O performance.”
Another interesting […]
By Ivo Krka
A significant portion of today’s mobile devices run on top of the Android OS. One of the reasons for Android’s widespread adoption is its open source nature. While this allows any programmer to look beyond the basic API (referred to as the Application Framework), understanding and modifying the lower layers of the Android stack is difficult. This becomes apparent as soon as you download Android’s roughly 12 GB of source code. In addition, while the Application Framework’s Java code is accompanied by detailed comments, the remainder of the code base is not thoroughly documented (if at all). To make things worse, the undocumented part of the code base is more complex, uses intricate IPC mechanisms, and switches between programming languages.
In this article, I present the architecture of the media playback infrastructure (with Stagefright as the underlying media player). My goal is to help you, an interested reader, get a grasp of how things work behind the curtains, and to help you more easily identify the part of the code base you may want to tweak or optimize. Some useful online resources already outline certain bits and pieces of the media player architecture (1 2), but the slideshow format of these descriptions omits a number of important details.
The architecture of the media player infrastructure, from the end-user apps to the media codecs that perform the algorithmic magic, is layered with many levels of indirection. The following diagram depicts the high-level architecture described below.
At the topmost layer of the architecture are the user apps that leverage media playback functionality, such as playing audio streams, ringtones, or video clips. These apps use the components from the Application Framework that implement the high-level interfaces for accessing and manipulating media content. Below this layer, things get more complicated […]