By Jason Frankovitz
Every software expert at Quandary Peak regularly reads and analyzes large volumes of source code, and while understanding the specific programming language can help, it’s not the only skill we bring to a project. Knowing the unique qualities and nuances that apply to programming code can be the difference between winning and losing a trial. In this eight-part series of posts I’ll describe a few of the unintuitive aspects of source code that experts have to keep in mind when working on your case.
Source Code Never Runs
Like attorneys, software programmers are in the business of technicalities, where small distinctions can sometimes make a big difference. This is particularly important in software patent cases, which can often hinge on very minor, esoteric aspects of code (very minor to a non-computer-scientist, at least). One of those potential trivialities is the distinction between the code written by programmers and the instructions the computer actually executes. As we shall see, virtually all the code written by programmers never runs on any computer. How is this possible? To explain this apparent contradiction, we need a bit of history.
Computers prefer numbers
Alan Turing was an early computer scientist who was instrumental in the Allies’ World War II victory. In 1936 he described a theoretical device, now called a Turing machine, that could read and write limitless symbols on an endless paper tape to carry out arbitrary mathematical tasks.
Later, in the 1970s, the first build-it-yourself home computer, the Altair 8800, was great at mathematical tasks, but not so great at inputting them: it had no keyboard. To enter a number into the Altair, you had to look up which switch positions on the front panel represented the number you wanted to enter, flip the switches accordingly, then flip a final switch to commit the number into memory.
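The lookup was really just binary conversion done by hand: each data switch represented one bit of the number. A modern Ruby sketch of the mapping an Altair user performed mentally (assuming the machine's eight-bit-wide data switches):

```ruby
# Each front-panel data switch on the Altair represented one bit of the
# value being entered, so the "lookup" was binary conversion by hand.
value = 42
bits  = value.to_s(2).rjust(8, "0")   # pad to 8 bits, the 8080's data width
switches = bits.chars.map { |b| b == "1" ? "up" : "down" }

puts "To enter #{value}, set the data switches (most significant bit first) to: #{bits}"
puts switches.join(", ")
```

Flipping the final "deposit" switch then committed those eight bits into memory as a single byte.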
By Jason Frankovitz
Every software expert at Quandary Peak regularly reads and analyzes large volumes of source code, and while understanding the specific programming language can help, it’s not the only skill we bring to a project. Knowing the unique qualities and nuances that apply to programming code can be the difference between winning and losing a case. In this eight-part series of posts we’ll describe a few of the unintuitive aspects of source code that experts have to keep in mind when working on your case.
What is dead code?
Dead code is source code that is present in a program but never actually used by it. It’s not uncommon for a software product to have hundreds of thousands (even millions) of lines of code, and not all of them are executed when the program runs. Essentially, two situations can result in dead code: one, a condition required to run a piece of code is never fulfilled; or two, the program provides no execution path into the code. In both cases the code exists in the program, but the program never executes it, resulting in dead code.
For the first situation, here’s an oversimplified example to illustrate the mechanism:
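In Ruby, with stand-in methods for the real application logic, it might look like this:

```ruby
# Hypothetical methods standing in for real application logic.
def complain_about_work;      puts "Ugh, work."             end
def make_dinner_reservations; puts "Table for two, please." end
def infringe_valuable_patent; puts "This never prints."     end

$DAY_OF_WEEK = Time.now.strftime("%A")  # e.g. "Monday", "Friday"

if $DAY_OF_WEEK == "Monday"
  complain_about_work
elsif $DAY_OF_WEEK == "Friday"
  make_dinner_reservations
elsif $DAY_OF_WEEK == "Kronsday"  # no such day exists...
  infringe_valuable_patent        # ...so this branch is dead code
end
```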
Above, if the $DAY_OF_WEEK variable has the value of “Monday” or “Friday”, either the complain_about_work or the make_dinner_reservations method will run. But there is no such day as “Kronsday”, so the condition guarding the patent-infringing code can never be true and the infringe_valuable_patent method will never run; it is dead code.
No route to execution
With just a small change we can illustrate how the second […]
By Jason Frankovitz
It can be hard to know if the Internet connection at your home or office really delivers the speed you’re paying for. With almost 90% of Americans using the Internet regularly, and a market size around $100 billion, both companies and consumers want to know if they’re getting their money’s worth. Simply trusting that your ISP is doing the right thing might not be wise. According to the Seattle Times, the Attorney General of New York is suing Charter Communications for false claims about the speed of its Internet services.
Fortunately, discovering how fast your network connection is doesn’t require arcane knowledge or special equipment. The venerable speedtest.net will test your network connection for free. The site clearly shows you your available bandwidth and latency (ping), but there are other benefits too. What’s less well-known about the site is a feature that lets you easily log your network tests over time. Using your log, you can see whether today is just a slow network day, or whether your ISP is consistently shorting you on the bandwidth you’re paying for.
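Once you have a log, even a few lines of code can tell you whether a slow reading is an outlier or a trend. Here's a Ruby sketch using a hypothetical CSV export of your results (the column names and numbers are made up for illustration):

```ruby
require "csv"

# Hypothetical log format: one row per test, exported from your results history.
log = CSV.parse(<<~CSV, headers: true)
  date,download_mbps
  2017-03-01,94.2
  2017-03-02,18.7
  2017-03-03,95.1
CSV

speeds  = log.map { |row| row["download_mbps"].to_f }
average = speeds.sum / speeds.size
today   = speeds.last

puts format("average: %.1f Mbps, latest: %.1f Mbps", average, today)
puts "Latest test is well below your average -- keep logging." if today < 0.8 * average
```

A single bad reading (like the 18.7 Mbps day above) drags the average down, which is exactly why a log over time is more persuasive than any one test.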
How to make your free network speed log
Go to www.speedtest.net, click “Create Account”, and make a new account using your email address and a secure password.
After you log in with your new account, run a test of your network speed by clicking the “BEGIN TEST” button. You’ll see the site check your ping time, download speed, and upload speed.
After the test ends, you’ll find the results you just got in the Recent Results table.
It’s easy to compare those results against new test results. Just run another speed test and the site will add […]
By Quandary Peak Research
As VLSI devices shrink to atomic scales, design and analysis become increasingly challenging. Dr. Shahin Nazarian, an expert at Quandary Peak Research and a faculty member in the Electrical Engineering Department at the University of Southern California, has been conducting research on the design and analysis of high-speed, energy-efficient circuits and systems. He has recently co-authored numerous conference and journal papers on system-level, algorithm-level, device-level, and circuit-level design, analysis, and optimization.
More than half a century ago, Gordon Moore predicted that the number of transistors in IC (Integrated Circuit) chips would double roughly every two years. The significance of this prediction stems from the role of transistors as the key logic components determining the performance of electronic systems. This prediction, now referred to as Moore’s Law, has held true for several decades and has become the semiconductor community’s primary guide for setting its long-term goals and plans. It is important to note that Moore’s Law is not only about shrinking transistors, but about reducing cost as well. This implies that design solutions involve high-level software and hardware analysis and optimization methodologies in addition to device-level techniques.
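The doubling prediction is easy to express as a back-of-the-envelope formula. A Ruby sketch, starting from a chip with roughly 2,300 transistors in 1971 (the numbers here are only illustrative):

```ruby
# Moore's Law as a simple doubling formula: transistor count doubles
# roughly every two years.
def projected_count(base_count, base_year, year)
  (base_count * 2**((year - base_year) / 2.0)).round
end

# Ten doublings on from a 2,300-transistor chip of 1971:
puts projected_count(2_300, 1971, 1991)  # prints 2355200
```

Twenty years means ten doublings, a factor of 1,024: exponential growth is what made the prediction so consequential for long-term planning.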
As the devices have scaled down to 28nm process nodes and below, the traditional bulk CMOS technologies have faced grave challenges linked to device unpredictability and high leakage dissipation induced by short-channel effects and Process-Voltage-Temperature (PVT) variations. There have been massive R&D investments to tackle those issues to help extend Moore’s Law. In parallel, alternative metrics (such as speed per unit power) and laws have emerged as the fear of the end of Moore’s Law grows. One of the most promising solutions has been the adoption of non-planar or quasi-planar FinFET and Gate-All-Around (GAA) transistors that offer more […]
By Quandary Peak Research
Two Quandary Peak experts and their teams presented their latest research at the twenty-sixth International Symposium on Software Reliability Engineering (ISSRE), one of the most prestigious IEEE events.
Dr. Mahdi Eslamimehr and his teammate from MIT won the ISSRE Best Paper award for “AtomChase: Directed Search Towards Atomicity Violations”, the result of three years’ work developing a state-of-the-art debugging tool for supercomputers. The Best Paper award is given to top researchers whose work influences academia and industry.
Atomicity violation is one of the main sources of concurrency bugs. Empirical studies show that the majority of atomicity violations are instances of the three-access pattern, in which two accesses to a shared variable by one thread are interleaved by an access to the same variable by another thread. Dr. Eslamimehr presented a novel approach to atomicity violation detection that comprises two parts: 1. execution schedule synthesis, and 2. directed concurrent execution based on constraint solving and concolic execution. In comparison to five previous tools on 22 benchmark codebases (including Apache Tomcat, with 4.5 million lines of Java code), AtomChase increased the number of three-access violations found by 24% and found errors in programs that had wrongly been assumed to be bug-free. To avoid reporting false alarms, Dr. Eslamimehr and his colleague verified sufficient conditions for the non-atomicity of three-access pattern traces. These conditions recognized 89% of the actual atomicity violations found by AtomChase. Because checking these conditions is two orders of magnitude faster than a brute-force check, AtomChase has made it possible to improve the quality of industrial software like the popular Eclipse integrated development environment.
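The three-access pattern fits in a few lines of code. Below is a minimal Ruby sketch (not drawn from AtomChase's benchmarks); two queues force the unlucky interleaving deterministically, standing in for a bad thread schedule that would otherwise occur only occasionally:

```ruby
balance      = 100
b_may_run    = Queue.new
a_may_resume = Queue.new

a = Thread.new do
  snapshot = balance        # A's first access: read the shared variable
  b_may_run << true         # hand off to B, simulating the unlucky schedule
  a_may_resume.pop          # wait until B's write has landed
  balance = snapshot + 10   # A's second access: write based on a stale read
end

b = Thread.new do
  b_may_run.pop
  balance += 50             # B's interleaved access to the same variable
  a_may_resume << true
end

[a, b].each(&:join)
puts balance                # 110, not the expected 160: B's deposit was lost
```

Thread A's read-then-write of `balance` was meant to be atomic; because it isn't, B's update is silently overwritten. Tools like AtomChase search for thread schedules that expose exactly this kind of interleaving.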
Another interesting study, on end-to-end Android application analysis, was presented at the same symposium by Dr. Sam Malek and his research group at the University of California, Irvine.
Pervasiveness of smartphones and the vast number of corresponding apps […]
By Mahdi Eslamimehr
When human health and safety depend on software, correct system operation is the most important concern. Unfortunately, a lack of proper testing and verification in such systems allows program defects to remain undetected and later turn into hazardous failures. Here are a few examples from a long history of failed programs that have cost lives:
- Therac-25 (the successor of Therac-6 and Therac-20) was a radiation therapy machine manufactured by Atomic Energy of Canada Limited (AECL). The machine had two modes of radiation therapy: high power and low power. In Therac-25’s predecessors, switching between modes was controlled by a hardware interlock. In Therac-25 the hardware interlock was replaced by a software lock, which failed because of an undetected race condition. This fatal bug caused the deaths of three patients between 1985 and 1987. Several lawsuits were filed as a result of these accidents in the late 1980s. Nancy Leveson, a professor at the University of Washington, and Clark Turner, a graduate student at the University of California, Irvine, conducted a long investigation into the software defect and published a report in IEEE Computer in 1993.
- Toyota was forced to recall more than 10 million vehicles between 2009 and 2011 due to an unintended acceleration problem. Reports from the National Highway Traffic Safety Administration documented 6,200 complaints involving unintended acceleration of Toyota vehicles. Further investigation revealed that 89 deaths and 57 injuries were potentially linked to major Toyota recalls. Even NASA investigated the issue and published a public report with its findings. Several problems were detected in Toyota’s Electronic Throttle Control System (ETCS) and chipsets. NASA’s investigation traced these defects to serious software development mistakes. For example, NASA confirmed that no timing analysis had been performed. In particular, no worst-case execution time (WCET) analysis was conducted because of the complex nature of the […]
By Nupul Kukreja
Software maintenance costs result from modifying your application to support new use cases or update existing ones, along with the continual bug fixing after deployment. As much as 70-80% of the Total Cost of Ownership (TCO) of software can be attributed to maintenance alone!
Software maintenance activities can be classified as:
- Corrective maintenance – costs due to modifying software to correct issues discovered after initial deployment (generally 20% of software maintenance costs)
- Adaptive maintenance – costs due to modifying a software solution to allow it to remain effective in a changing business environment (25% of software maintenance costs)
- Perfective maintenance – costs due to improving or enhancing a software solution to improve overall performance (generally 5% of software maintenance costs)
- Enhancements – costs due to continuing innovations (generally 50% or more of software maintenance costs)
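Plugging the rough percentages above into a hypothetical budget shows where the money goes; the $1M TCO figure in this Ruby sketch is purely illustrative:

```ruby
# Rough split of the maintenance budget, per the categories above.
SPLIT = {
  corrective:   0.20,
  adaptive:     0.25,
  perfective:   0.05,
  enhancements: 0.50,
}

tco         = 1_000_000   # hypothetical total cost of ownership
maintenance = tco * 0.75  # midpoint of the 70-80% range cited above

SPLIT.each do |category, share|
  puts format("%-12s $%d", category, maintenance * share)
end
```

Even at the conservative end of the range, enhancements alone consume more than a third of the total ownership cost, which is why maintainability deserves attention up front.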
Since maintenance costs eclipse those of other software engineering activities by a large margin, it is imperative to answer the following question:
How maintainable is my application/source-code, really?
The answer to this question is non-trivial and requires a deeper understanding of what it means for an application to be maintainable. Measuring software maintainability is hard: there is no single metric that can state whether one application is more maintainable than another, and no single tool that can analyze your code repository and provide an accurate answer either. There is no substitute for a human reviewer, but even humans can’t analyze an entire code repository to give a definitive answer. Some amount of automation is necessary.
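To make "some amount of automation" concrete, here is a deliberately crude Ruby sketch of two automatable signals; the metrics and the helper name are illustrative, not a definitive measure of maintainability:

```ruby
# Two crude, automatable signals -- no single metric captures maintainability,
# but simple measurements like these can flag code that deserves human review.
def quick_metrics(source)
  code     = source.lines.reject { |l| l.strip.empty? }
  comments = code.count { |l| l.strip.start_with?("#") }
  {
    loc:           code.size,                      # non-blank lines of code
    methods:       source.scan(/^\s*def\s/).size,  # rough method count
    comment_ratio: comments.to_f / code.size,      # share of comment lines
  }
end

snippet = <<~RUBY
  # Adds two numbers.
  def add(a, b)
    a + b
  end
RUBY

puts quick_metrics(snippet).inspect
```

Run over a whole repository (e.g. `quick_metrics(File.read("app.rb"))` per file), numbers like these trend over time and point a human reviewer at the files most worth a closer look.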
So, how can you measure the maintainability of your application? To answer this question, let’s dissect the definition of maintainability further. Imagine you have access to the source code of two applications – A and B. Let’s say you also have the superhuman ability to compare both of them in a small […]
By George Edwards
The importance of software patents has increased dramatically in recent years. The high-profile patent disputes between Apple, Google, and other smartphone companies have attracted the most attention in the press, but patent battles are being waged all across the software industry. As a computer scientist who has been retained as an expert in software patent disputes, I have seen firsthand how attorneys go about selecting experts to provide technical consulting and testimony in these cases. And unfortunately, most of them are going about it the wrong way. In this article, I explain why, and describe the right way to select software experts.
Two Distinct Expert Tasks
Successfully arguing that an accused software product does (or does not) infringe a particular patent requires two distinct tasks to be performed by experts: code analysis and expert testimony. First, the accused products, which are computer programs, must be analyzed to determine exactly which portions of the computer code implement the patented invention. Second, a persuasive argument, based on the evidence produced by the analysis, that the patent is (or is not) being infringed must be made in expert reports, depositions, and trial testimony.
The two tasks mentioned above seem so intrinsically linked that the obvious decision is to hire a single expert to perform both tasks. Attorneys reason that an expert should be someone who will impress a jury and intimidate the opposition, such as a senior technology professional with an established reputation in the field or a university professor with an extensive publication record in the relevant technology. Attorneys then assume that whoever is selected to prepare the expert report and testify must also personally perform the code analysis.
However, a stronger case can be built – and money saved – by […]
By Quandary Peak Research
Brad Ulrich, a computer scientist at Quandary Peak Research, was recently engaged to provide software analysis and consulting for Step-By-Step Academy, an Ohio-based company, in the matter of Step-By-Step Academy, Inc. v. Special Learning, Inc. in the U.S. District Court for the Southern District of Ohio. Special Learning is a Chicago-based company.
The case involves a dispute over the development of electronic health record (EHR) software and software designed to meet Commission on Accreditation of Rehabilitation Facilities (CARF) compliance standards. Mr. Ulrich is evaluating software developed under contracts, and the technical performance and requirements under those contracts. Claims in the case include breach of written and oral agreements/contracts, detrimental reliance, promissory estoppel, quantum meruit, and breach of covenant of good faith and fair dealing.
Mr. Ulrich’s work with the case is ongoing.
By Quandary Peak Research
Nokia’s recent settlement with HTC over smartphone patents, widely reported in the media on Friday, resulted in the dismissal of all pending patent litigation between the two companies worldwide. Industry observers cited the outcome as an important step in Nokia’s effort to generate royalties from its patent portfolio.
Right up until the settlement was finalized, several of our software experts here at Quandary Peak were working earnestly with the attorneys at Desmarais LLP on behalf of Nokia. Nenad Medvidovic, George Edwards, and Brad Ulrich were all down in the trenches, performing the painstaking work of documenting the technical evidence related to the patents-in-suit in the matters before the International Trade Commission and the U.S. District Court for the District of Delaware. The patents disclosed inventions related to receiving push messages, downloading and installing apps over the network, video compression and encoding, protecting sensitive device configuration data, Bluetooth connectivity, calendar data management, and other technologies. The analysis by Quandary Peak included C++ source code review and Java source code review.