By Jason Frankovitz
Every software expert at Quandary Peak regularly reads and analyzes large volumes of source code, and while understanding the specific programming language can help, it’s not the only skill we bring to a project. Knowing the unique qualities and nuances that apply to programming code can be the difference between winning and losing a trial. In this eight-part series of posts I’ll describe a few of the unintuitive aspects of source code that experts have to keep in mind when working on your case.
Source Code Never Runs
Like attorneys, software programmers are in the business of technicalities, where small distinctions can sometimes make a big difference. This is particularly important in software patent cases, which can often hinge on very minor, esoteric aspects of code (very minor to a non-computer-scientist, at least). One of those potential trivialities is the distinction between the code programmers write and the instructions the computer actually executes. As we shall see, virtually all the code written by programmers never runs on any computer. How is this possible? To explain this apparent contradiction, we need a bit of history.
Computers prefer numbers
Alan Turing was an early computer scientist who was instrumental in the Allies’ World War II victory. In 1936 he described a theoretical device, now called a Turing machine, that could read and write limitless numbers on an endless paper tape to complete arbitrary mathematical tasks.
Later, in the 1970s, the first build-it-yourself home computer, the Altair 8800, was great at mathematical tasks, but not so great at inputting them: it had no keyboard. To enter a number into the Altair, you had to look up which switch positions on the front panel represented the number you wanted to enter, flip the switches accordingly, then flip a final switch to commit the number into memory.
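The gap between what programmers write and what machines execute is just as visible today. As a quick modern illustration (my sketch, not part of the original Altair story), Python’s standard dis module reveals the bytecode instructions that the CPython interpreter actually executes in place of the source a programmer typed:

```python
import dis

def add(a, b):
    # The line a programmer writes...
    return a + b

# ...and the instructions the interpreter actually executes.
ops = [instr.opname for instr in dis.Bytecode(add)]
print(ops)
```

The exact opcode names vary between interpreter versions, but none of them is the `return a + b` the programmer typed; the source itself never runs.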
By Jason Frankovitz
What is dead code?
Dead code is source code that is present in the program but doesn’t actually get used by the program. It’s not uncommon for a software product to have hundreds of thousands (even millions) of lines of code, and not all of them are used every time they’re run. Essentially, there are two situations which can result in dead code: one, if a condition required to run a piece of code is never fulfilled, or two, if the program provides no execution path into the code. In both cases the code exists in the program, but the program never executes it, resulting in dead code.
For the first situation, here’s an oversimplified example to illustrate the mechanism:
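A minimal sketch of that mechanism, reconstructed in Python from the description below (the original example may have used a different language; the `calls` list is added here only to make the behavior visible):

```python
DAY_OF_WEEK = "Monday"  # set elsewhere in the real program

calls = []  # record which methods actually run

def complain_about_work():
    calls.append("complain_about_work")

def make_dinner_reservations():
    calls.append("make_dinner_reservations")

def infringe_valuable_patent():
    calls.append("infringe_valuable_patent")

if DAY_OF_WEEK == "Monday":
    complain_about_work()
elif DAY_OF_WEEK == "Friday":
    make_dinner_reservations()
elif DAY_OF_WEEK == "Kronsday":  # there is no such day, so this branch never runs
    infringe_valuable_patent()
```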
Above, if the $DAY_OF_WEEK variable has the value “Monday” or “Friday”, either the complain_about_work or the make_dinner_reservations method will run. Because there is no such day as “Kronsday”, the condition guarding the infringe_valuable_patent method can never be true, so that method will never run; it is dead code.
No route to execution
With just a small change we can illustrate how the second […]
By Quandary Peak Research
The European Commission’s investigation of Google’s alleged misconduct has done nothing but expand and intensify over the last six years. Margrethe Vestager, the EU’s Competition Commissioner appointed in the midst of the investigations, has spearheaded the ‘don’t back down’ attitude taken by the EU. She resisted her predecessors’ efforts to settle the case with the tech giant, and has instead added to the list of accusations in the anti-trust lawsuit.
The accusations against Google now include earlier claims that the company abused its market dominance with unfair business practices pertaining to its comparison shopping services and search advertisements, as well as more recent allegations that it engaged in predatory business practices by imposing unfair restrictions on Android device manufacturers and mobile network operators. Earlier this year, the European Commission sent its second and third Statements of Objections to Google, formalizing its charges against the company and heightening the profile of the case. Google continues to deny the charges and is working to settle the disputes while avoiding heavy penalties. In other words, the battle is very much alive and well.
It may surprise many readers to realize this case is now well into its sixth year, so we’ve created a timeline to track the major developments and key findings of the case.
February 24, 2010 – The European Commission confirms receipt of three antitrust complaints against Google. The complaints were authored by Foundem, Ciao, and eJustice. The Commission informs Google of the complaints and asks the company to address the allegations.
November 30, 2010 – The European Commission opens an antitrust investigation of Google, citing allegations that Google has “abused a dominant position in online search”. The Commission’s press release indicates the investigation will explore three claims:
- That Google lowered the rank of unpaid search results of competing services
- That Google lowered the […]
By Quandary Peak Research
Two Quandary Peak experts and their teams presented their latest research at the twenty-sixth International Symposium on Software Reliability Engineering (ISSRE), one of the most respected IEEE conferences.
Dr. Mahdi Eslamimehr and his teammate from MIT won the ISSRE 2016 Best Paper award for “AtomChase: Directed Search Towards Atomicity Violations”, the result of three years’ work developing a state-of-the-art debugging tool for supercomputers. The Best Paper award is given to top researchers whose work influences both academia and industry.
Atomicity violation is one of the main sources of concurrency bugs. Empirical studies show that the majority of atomicity violations are instances of the three-access pattern, in which two accesses to a shared variable by one thread are interleaved by an access to the same variable by another thread. Dr. Eslamimehr presented a novel approach to atomicity violation detection that comprises two parts: (1) execution schedule synthesis and (2) directed concurrent execution based on constraint solving and concolic execution. In a comparison with five previous tools on 22 benchmark codebases (including Apache Tomcat, with 4.5 million lines of Java code), AtomChase increased the number of three-access violations found by 24% and found errors in programs that were wrongly assumed to be bug-free. To avoid reporting false alarms, Dr. Eslamimehr and his colleague established sufficient conditions for the non-atomicity of three-access pattern traces. These conditions could recognize 89% of the actual atomicity violations found by AtomChase. Because checking these conditions is two orders of magnitude faster than a brute-force check, AtomChase has made it possible to improve the quality of industrial software like the popular Eclipse integrated development environment.
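To make the three-access pattern concrete, here is a deterministic toy sketch (hypothetical code, not taken from AtomChase) in which one thread’s read-then-write of a shared variable is interleaved by another thread’s write, silently losing an update:

```python
# Shared variable accessed by two (simulated) threads, A and B.
x = 0

# Access 1: thread A reads the shared variable.
a_read = x        # A sees 0

# Access 2 (interleaved): thread B writes the same variable.
x = 10            # B's update

# Access 3: thread A writes based on its now-stale read.
x = a_read + 1    # x becomes 1; B's update to 10 is silently lost
```

In a real program the interleaving is nondeterministic, which is exactly why tools like AtomChase must search for a schedule that triggers it.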
Another interesting study, on end-to-end Android application analysis, was presented at the same symposium by Dr. Sam Malek and his research group at the University of California, Irvine.
Pervasiveness of smartphones and the vast number of corresponding apps […]
By Mahdi Eslamimehr
When human health and safety depends on software, correct system operation is the most important concern. Unfortunately, a lack of proper testing and verification in such systems allows program defects to remain undetected and later turn into hazardous failures. Here are a few examples from a long history of failing programs that have cost lives:
- Therac-25 (the successor of Therac-6 and Therac-20) was a radiation therapy machine manufactured by Atomic Energy of Canada Limited (AECL). The machine had two modes of radiation therapy: high power and low power. In Therac-25’s predecessors, switching between modes was controlled by a hardware interlock. The hardware interlock was replaced by a software lock in Therac-25, which failed because of an undetected race condition. This fatal bug caused the deaths of three patients between 1985 and 1987. Several lawsuits were filed as a result of these accidents in the late 1980s. Nancy Leveson, a professor at the University of Washington, and Clark Turner, a graduate student at the University of California at Irvine, conducted a long investigation into the software defect and published a report in IEEE Computer in 1993.
- Toyota was forced to recall more than 10 million vehicles between 2009 and 2011 due to an unintended acceleration problem. Reports from the National Highway Traffic Safety Administration documented 6,200 complaints involving unintended acceleration of Toyota vehicles. Further investigation revealed that 89 deaths and 57 injuries were potentially linked to major Toyota recalls. Even NASA investigated the issue and published a public report with its findings. Several problems were detected in Toyota’s Electronic Throttle Control System (ETCS) and chipsets. NASA’s investigation traced these defects to important software development mistakes. For example, NASA confirmed that no timing analysis had been performed. In particular, no worst-case execution time (WCET) analysis was conducted because of the complex nature of the […]
By Nupul Kukreja
Software maintenance costs result from modifying your application either to support new use cases or to update existing ones, along with continual bug fixing after deployment. As much as 70-80% of the Total Cost of Ownership (TCO) of software can be attributed to maintenance costs alone!
Software maintenance activities can be classified as:
- Corrective maintenance – costs due to modifying software to correct issues discovered after initial deployment (generally 20% of software maintenance costs)
- Adaptive maintenance – costs due to modifying a software solution to allow it to remain effective in a changing business environment (25% of software maintenance costs)
- Perfective maintenance – costs due to improving or enhancing a software solution to improve overall performance (generally 5% of software maintenance costs)
- Enhancements – costs due to continuing innovations (generally 50% or more of software maintenance costs)
Since maintenance costs eclipse other software engineering activities by a large amount, it is imperative to answer the following question:
How maintainable is my application/source-code, really?
The answer to this question is non-trivial and requires a deeper understanding of what it means for an application to be maintainable. Measuring software maintainability is hard: there is no single metric that can state whether one application is more maintainable than another, and no single tool that can analyze your code repository and provide an accurate answer either. There is no substitute for a human reviewer, but even humans can’t analyze an entire code repository to give a definitive answer. Some amount of automation is necessary.
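As a toy illustration of the kind of automated signal such tools compute (my example, not from the original article), the sketch below counts branching constructs in Python source using the standard ast module — a crude proxy for cyclomatic complexity, and only one signal among the many a real analysis would combine:

```python
import ast

def decision_points(source: str) -> int:
    """Count branching constructs -- a rough proxy for cyclomatic complexity."""
    tree = ast.parse(source)
    branches = (ast.If, ast.For, ast.While, ast.Try, ast.BoolOp)
    return sum(isinstance(node, branches) for node in ast.walk(tree))

sample = """
def f(x):
    if x > 0:
        for i in range(x):
            if i % 2:
                x -= 1
    return x
"""
score = decision_points(sample)
```

A higher score suggests more paths to test and reason about, but as noted above, no single number like this can settle whether one codebase is more maintainable than another.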
So, how can you measure the maintainability of your application? To answer this question let’s dissect the definition of maintainability further. Imagine you have access to the source code of two applications – A and B. Let’s say you also have the super human ability to compare both of them in a small […]
By Quandary Peak Research
One of Quandary Peak’s experts in web, Internet, and search engine technology recently provided courtroom testimony in a contempt hearing, Golden Best Plumbing, Inc. v. Touni Baghdasarian and Happy Rooter. The Internet expert, Jason Frankovitz, opined on whether the plaintiff’s trade name was being used to market services on the defendant’s Web site in defiance of a previous court order. The matter is case no. BC524978 in the Superior Court of California for the County of Los Angeles.
At issue in the case were particular snippets of computer code, known as HTML tags, that are used by Google, Bing, and other major search engines in determining where to rank pages in the list of results shown for a given query, also known as the “organic” results. For example, the HTML tags known as “meta description,” “title,” and “h1” have been verified by Google to play a role in the algorithm that computes the relevance of a page. Moreover, the contents of the meta description tag may appear underneath the link to a site in Google’s search results pages. Including a competitor’s trade name within these tags could be viewed as using the name for marketing purposes.
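To show concretely what these tags look like and how easily software can read them, here is a short Python sketch using the standard library’s HTML parser. The page and the trade name “Acme Plumbing” are hypothetical, not taken from the actual case:

```python
from html.parser import HTMLParser

# A toy page using the three tags at issue.
PAGE = """<html><head>
<title>Acme Plumbing - Fast Repairs</title>
<meta name="description" content="Acme Plumbing fixes leaks fast.">
</head><body>
<h1>Acme Plumbing Services</h1>
</body></html>"""

class TagExtractor(HTMLParser):
    """Collect the text search engines read from title, meta description, and h1."""
    def __init__(self):
        super().__init__()
        self.found = {}
        self._open_tag = None

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "meta" and attrs.get("name") == "description":
            self.found["meta description"] = attrs.get("content", "")
        elif tag in ("title", "h1"):
            self._open_tag = tag

    def handle_data(self, data):
        if self._open_tag and data.strip():
            self.found[self._open_tag] = data.strip()

    def handle_endtag(self, tag):
        if tag == self._open_tag:
            self._open_tag = None

parser = TagExtractor()
parser.feed(PAGE)
```

A search engine crawler does essentially this at massive scale, which is why text placed in these tags reaches the ranking algorithm even when it is invisible to a site’s human visitors.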
By George Edwards
One of the most difficult questions when creating a new product is how to ensure that the product has a chance to attract a large user base. The good news is that the big players in the technology market such as Facebook, Google, and Twitter let developers create applications that can access the existing user data through publicly available APIs. This means, for example, that every Twitter user is a prospective customer for a well-developed Twitter application.
One of the more recent and still underutilized arrivals to the public API space is the Gmail API, which provides RESTful access to a user’s Inbox messages, Inbox configuration, and message labels, along with the ability to draft and send messages on a user’s behalf. With this toolkit, it would be possible to implement a full web client that replicates all of Gmail’s functionality anew. This is not the purpose of the API, however, and the more interesting use cases should come from innovative new features:
- creative Inbox visualizations;
- helping people reduce email clutter (consider trying Unroll.me if you haven’t already);
- providing personalized analytics tools (who do you communicate with the most?);
- extracting important event details;
- putting reminders directly into an Inbox;
- and many others.
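Most of these ideas begin the same way: an authorized REST request against the user’s mailbox. The sketch below only constructs such a request — the token and parameter values are placeholders, and obtaining a real OAuth2 access token through Google’s consent flow is outside its scope:

```python
# Base URL of the Gmail API for the authorized user ("me").
BASE = "https://gmail.googleapis.com/gmail/v1/users/me"

def list_messages_request(access_token: str, label: str = "INBOX",
                          max_results: int = 10):
    """Build the (url, headers) pair for a Gmail 'list messages' call."""
    url = f"{BASE}/messages?labelIds={label}&maxResults={max_results}"
    headers = {"Authorization": f"Bearer {access_token}"}
    return url, headers

# Hypothetical token for illustration only.
url, headers = list_messages_request("hypothetical-oauth-token")
```

Sending this request with any HTTP client returns a JSON list of message IDs, which an application can then fetch individually to power visualizations, analytics, or reminders like those above.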
Accessing the Gmail API
The high-level process for accessing the Gmail API is depicted in […]
By George Edwards
The importance of software patents has increased dramatically in recent years. The high-profile patent disputes between Apple, Google, and other smartphone companies have attracted the most attention in the press, but patent battles are being waged all across the software industry. As a computer scientist who has been retained as an expert in software patent disputes, I have seen firsthand how attorneys go about selecting experts to provide technical consulting and testimony in these cases. And unfortunately, most of them are going about it the wrong way. In this article, I explain why, and describe the right way to select software experts.
Two Distinct Expert Tasks
Successfully arguing that an accused software product does (or does not) infringe a particular patent requires two distinct tasks to be performed by experts: code analysis and expert testimony. First, the accused products, which are computer programs, must be analyzed to determine exactly which portions of the computer code implement the patented invention. Second, a persuasive argument, based on the evidence produced by the analysis, that the patent is (or is not) being infringed must be made in expert reports, depositions, and trial testimony.
The two tasks mentioned above seem so intrinsically linked that the obvious decision is to hire a single expert to perform both tasks. Attorneys reason that an expert should be someone who will impress a jury and intimidate the opposition, such as a senior technology professional with an established reputation in the field or a university professor with an extensive publication record in the relevant technology. Attorneys then assume that whoever is selected to prepare the expert report and testify must also personally perform the code analysis.
However, a stronger case can be built – and money saved – by […]
By Quandary Peak Research
A FoxNews.com story on computer glitches in the HealthCare.gov website features several quotes by Quandary Peak software expert George Edwards. The article adds to a recent flurry of reports chronicling bugs in the new federal insurance marketplace. Dr. Edwards explained the inherent challenges faced by engineers in building the site, and noted that it’s impossible to tell at this stage how serious the defects are.
“I wouldn’t rule out the possibility” of ongoing problems with the website, Edwards is quoted as saying. If flaws exist in the site’s core architecture, they could take some time to fix.
Edwards also pointed out how the site’s engineers were, in some sense, given an impossible task. For example, the October 1 unveiling of the site was driven by public policy, rather than technical, considerations.
The full article is available here on the FoxNews.com website.