By Jason Frankovitz
Every software expert at Quandary Peak regularly reads and analyzes large volumes of source code, and while understanding the specific programming language can help, it’s not the only skill we bring to a project. Knowing the unique qualities and nuances of source code can be the difference between winning and losing a trial. In this eight-part series of posts, I’ll describe a few of the unintuitive aspects of source code that experts have to keep in mind when working on your case.
Source Code Never Runs
Like attorneys, software programmers are in the business of technicalities, where small distinctions can sometimes make a big difference. This is particularly important in software patent cases, which can often hinge on very minor, esoteric aspects of code (very minor to a non-computer-scientist, at least). One of those potential trivialities is the distinction between the code written by programmers and the instructions the computer actually executes. As we shall see, virtually none of the code written by programmers ever runs on any computer. How is this possible? To explain this apparent contradiction, we need a bit of history.
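Before getting to the history, here is a minimal, purely illustrative sketch of that gap (a hypothetical Python function, not anything from a real case): even this trivial bit of source code is first translated into lower-level bytecode instructions, and only those instructions are ever executed.

```python
# A minimal sketch showing that the source a programmer writes is not what the
# machine executes: CPython compiles this function into bytecode, and only that
# bytecode ever runs. The function itself is hypothetical.
import dis

def add_tax(price, rate):
    """Source code as a human wrote it."""
    return price + price * rate

# Print the lower-level instructions the interpreter actually executes.
dis.dis(add_tax)
```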
Computers prefer numbers
Alan Turing was an early computer scientist who was instrumental in the Allies’ World War II victory. Seventy years ago he theorized a device, now called a Turing machine, that could read and write limitless numbers on an endless paper tape, to complete arbitrary mathematical tasks.
Later, in the 1970s, the first build-it-yourself home computer, the Altair 8800, was great at mathematical tasks, but not so great at inputting them: it had no keyboard. To enter a number into the Altair, you had to look up which switch positions on the front panel represented the number you wanted to enter, flip the switches accordingly, then flip a final switch to commit the number into memory.
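As a modern, hedged illustration (not Altair-specific code), the snippet below shows the kind of lookup an operator had to perform by hand: translating a decimal value into the eight binary switch positions that encode it.

```python
# Hypothetical illustration: the Altair's front-panel switches encoded each
# byte in binary. To "type" the decimal value 77, an operator set eight
# switches to match its binary form.
value = 77
bits = format(value, "08b")                       # '01001101'
switches = ["UP" if b == "1" else "DOWN" for b in bits]
print(f"{value} -> bits {bits} -> switches {switches}")
```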
By Quandary Peak Research
In April 2016, the Campaign for Accountability (CfA) launched the Google Transparency Project (GTP), aimed at creating an online resource for the public to explore the relationship between Google and the government—and the impact of this relationship on American lives. In a statement, CfA Executive Director Anne Weisman said, “Google has long been a strong advocate of transparency in government, business, and even users’ private lives. It has not, however, been transparent about its own dealings with the government.” While other issues targeted by the CfA appear to have a more industry-wide spotlight, the Google Transparency Project focuses specifically on Google’s relationship with the government and its influence on public policy.
Major reports released by the GTP thus far examine the scope of Google’s access to the White House during Obama’s presidency, as well as the suggested ‘revolving door’ between Google employees and White House staff. Data compiled in the reports is accessible for public review and is accompanied by analysis from the CfA.
On the surface, this effort seems like it would be cut-and-dried in the public’s interest. Google is the second largest company in the world by market capitalization (second only to Apple Inc.), and it spent over $16 million on lobbying in 2014. That is not a meaningful sum to Google, but it is certainly a meaningful one in the lobbying world: $16 million in 2014 puts Google just below Comcast and the Pharmaceutical Research and Manufacturers of America in lobbying dollars spent.
Knowing what goes on in the lobbying world and between government and business is critical, but the GTP has a transparency problem of its own. As Fortune’s Jeff John Roberts noted back in April, “The folks running the Google Transparency Project won’t say who is paying for it, which is odd […]
By Mahdi Eslamimehr
When human health and safety depends on software, correct system operation is the most important concern. Unfortunately, a lack of proper testing and verification in such systems allows program defects to remain undetected and later turn into hazardous failures. Here are a few examples from a long history of failing programs that have cost lives:
- Therac-25 (the successor of Therac-6 and Therac-20) was a radiation therapy machine manufactured by Atomic Energy of Canada Limited (AECL). The machine had two modes of radiation therapy: high power and low power. In Therac-25’s predecessors, switching between modes was controlled by a hardware lock. In Therac-25, the hardware lock was replaced by a software lock, which failed because of an undetected race condition (a simplified sketch of this class of bug appears after this list). This fatal defect caused the deaths of three patients between 1985 and 1987. Several lawsuits were filed as a result of these accidents in the late 1980s. Nancy Leveson, a professor at the University of Washington, and Clark Turner, a graduate student at the University of California at Irvine, conducted a long investigation into the software defect and published a report in IEEE Computer in 1993.
- Toyota was forced to recall more than 10 million vehicles between 2009 and 2011 due to an unintended acceleration problem. Reports from the National Highway Traffic Safety Administration documented 6,200 complaints involving unintended acceleration of Toyota vehicles. Further investigation revealed that 89 deaths and 57 injuries were potentially linked to major Toyota recalls. Even NASA investigated the issue and published a public report with its findings. Several problems were detected in Toyota’s Electronic Throttle Control System (ETCS) and chipsets. NASA’s investigation traced these defects to important software development mistakes. For example, NASA confirmed that no timing analysis had been performed. In particular, no worst-case execution timing (WCET) analysis was conducted because of the complex nature of the […]
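The sketch below is a deliberately simplified, generic illustration of the race-condition bug class mentioned in the Therac-25 example above: a check-then-act window between two related updates that should have been atomic. It is not the actual Therac-25 code, and all names in it are hypothetical.

```python
# A simplified sketch of a check-then-act race condition in a software
# interlock. Generic illustration only; NOT the Therac-25 software or its
# real failure sequence. All names are hypothetical.
import threading
import time

mode = "LOW_POWER"
flattener_in_place = False

def switch_to_high_power():
    global mode, flattener_in_place
    mode = "HIGH_POWER"
    time.sleep(0.01)           # simulates slow hardware repositioning
    flattener_in_place = True  # should have completed before the mode changed

def fire_beam():
    # Without a lock, this check can run in the window between the two
    # updates above and observe an inconsistent, unsafe state.
    if mode == "HIGH_POWER" and not flattener_in_place:
        print("UNSAFE: high-power beam with low-power hardware configuration")

t1 = threading.Thread(target=switch_to_high_power)
t2 = threading.Thread(target=fire_beam)
t1.start(); t2.start()
t1.join(); t2.join()
```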
By Nupul Kukreja
Software maintenance costs result from modifying your application either to support new use cases or to update existing ones, along with the continual bug fixing after deployment. As much as 70-80% of the total cost of ownership (TCO) of software can be attributed to maintenance alone!
Software maintenance activities can be classified as:
- Corrective maintenance – costs due to modifying software to correct issues discovered after initial deployment (generally 20% of software maintenance costs)
- Adaptive maintenance – costs due to modifying a software solution to allow it to remain effective in a changing business environment (25% of software maintenance costs)
- Perfective maintenance – costs due to improving or enhancing a software solution to improve overall performance (generally 5% of software maintenance costs)
- Enhancements – costs due to continuing innovations (generally 50% or more of software maintenance costs)
Since maintenance costs eclipse those of other software engineering activities by a large margin, it is imperative to answer the following question:
How maintainable is my application/source-code, really?
The answer to this question is non-trivial and requires a deeper understanding of what it means for an application to be maintainable. Measuring software maintainability is hard because there is no single metric that can state whether one application is more maintainable than another, and no single tool that can analyze your code repository and provide an accurate answer either. There is no substitute for a human reviewer, but even humans can’t analyze entire code repositories to give a definitive answer. Some amount of automation is necessary.
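As one illustrative example of such automation (a rough proxy signal, not a definitive maintainability measure), the sketch below uses Python’s standard-library ast module to count how much branching each function contains; functions with higher counts tend to be harder to change safely.

```python
# A minimal sketch of one automatable proxy for maintainability: the decision
# density of each function (a rough cyclomatic-style count), computed with the
# standard-library ast module. The metric is illustrative only; a real
# assessment combines many signals with human review.
import ast

BRANCH_NODES = (ast.If, ast.For, ast.While, ast.Try, ast.BoolOp, ast.ExceptHandler)

def complexity_report(source: str) -> dict:
    tree = ast.parse(source)
    report = {}
    for node in ast.walk(tree):
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
            branches = sum(isinstance(n, BRANCH_NODES) for n in ast.walk(node))
            report[node.name] = 1 + branches  # one straight-line path plus branches
    return report

if __name__ == "__main__":
    sample = """
def tidy(x):
    return x + 1

def tangled(x):
    if x > 0:
        for i in range(x):
            if i % 2:
                x -= 1
    return x
"""
    # Higher numbers suggest code that is harder to understand and modify.
    print(complexity_report(sample))
```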
So, how can you measure the maintainability of your application? To answer this question, let’s dissect the definition of maintainability further. Imagine you have access to the source code of two applications – A and B. Let’s say you also have the superhuman ability to compare both of them in a small […]
By Sam Malek
Security has become the Achilles’ heel of most modern software systems. Techniques ranging from manual inspection to automated static and dynamic code analyses are commonly employed to identify security vulnerabilities prior to the release of software. However, these techniques are time-consuming and cannot keep pace with the growth of software repositories, such as Google Play and the Apple App Store, that host millions of apps.
One opportunity to tackle this issue arises from the fact that the software products in these repositories are increasingly organized into categories. Some examples are SourceForge for open-source software and Google Play for Android applications. In addition to helping users search and browse for apps, these categories have been shown to be good predictors of the common features found within software of a particular category.
In a recent publication, Quandary Peak software expert Prof. Sam Malek and his team of researchers at George Mason University show that knowing the category of an Android application is sufficient for accurately predicting the types of security vulnerabilities that application may have. The approach works by mining a large number of apps available on the public app markets (e.g., Google Play). The apps are then analyzed for known security vulnerabilities, which can be detected through a variety of static analysis tools. The vulnerabilities detected in these apps are then used to build a classifier that can determine, with very high accuracy, the types of security vulnerabilities one may encounter in a new app of a certain category.
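The gist of the approach can be sketched in a few lines. The data and the simple frequency-based predictor below are purely illustrative stand-ins for the paper’s mined corpus and trained classifier, not the authors’ actual method.

```python
# Toy sketch of the idea: vulnerabilities observed in previously analyzed apps
# are tallied per category, and a new app is flagged as likely to exhibit the
# vulnerability types most common in its category. Data and category/vulnerability
# names are hypothetical.
from collections import Counter, defaultdict

mined_apps = [
    ("finance", ["insecure-storage", "weak-crypto"]),
    ("finance", ["weak-crypto"]),
    ("game",    ["excessive-permissions"]),
    ("game",    ["excessive-permissions", "insecure-network"]),
]

by_category = defaultdict(Counter)
for category, vulns in mined_apps:
    by_category[category].update(vulns)

def predict(category: str, top_n: int = 2) -> list:
    """Return the vulnerability types most frequently seen in this category."""
    return [v for v, _ in by_category[category].most_common(top_n)]

print(predict("finance"))  # ['weak-crypto', 'insecure-storage']
print(predict("game"))     # ['excessive-permissions', 'insecure-network']
```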
This research has significant implications for consumers and app market operators, as it allows them to determine the types of security risks posed by applications in a given category without requiring any specialized tools or detailed analysis of the software. It could also help a security analyst to target the […]
By Quandary Peak Research
Legal scholars point to a handful of philosophical theories as the basis for the moral judgment that patents on technology are just and beneficial to society. Are these theories valid in the context of the modern software industry? In particular, do the unique characteristics of software, in contrast to other forms of technology, make the traditional rationales used to justify patents less valid for software patents?
The “Natural Rights” Justification
John Locke, a 17th century English philosopher, developed the intellectual foundation for the natural rights justification for patents. Put very simply, this argument posits that individuals have a right to ownership of their own labor, and when people apply their labor to create things of value, they obtain ownership of those things. As applied to patents, this theory argues that inventors have invested their labor to design a process or machine, and are entitled to own that process or machine.
Does this theory hold up in practical terms, when viewed in the context of the modern software industry? For the most part, yes: there are very few who question whether software engineers deserve to own the fruits of their labor.
One problem, however, with applying the natural rights rationale to software patents is that it doesn’t take much effort to imagine an abstract computer program that does something useful that no one has done before; all the hard work is in making it work well. Thinking of what you want a program to do is much easier than making it reliable, scalable, user-friendly, and so on. If “inventing” doesn’t require actually implementing the program in a way that has measurable quality, people don’t actually need to do much work to obtain a software patent.
The patent law contains a requirement for “enablement” that is supposed to ensure that inventors disclose how to make their inventions […]
By George Edwards
The Joint 10th Working IEEE/IFIP Conference on Software Architecture & 6th European Conference on Software Architecture (WICSA/ECSA 2012) was held this week in Helsinki, Finland. Along with keynote addresses and research presentations, the conference included a panel discussion of the impact of software architecture research on software architecture practice. The panel included practicing software architects who are also involved in research and a professor who collaborates extensively with the software industry. In that panel, multiple speakers agreed on a few points:
- Less silo-oriented research: architects need to deal with problems that require cross-discipline expertise, so one-dimensional, specialized research does not help them much.
- Architects need more domain-specific research to use in practice, not simply general knowledge.
- The research community should focus less on low-hanging fruit and more on long-term problems.
Lastly, most of the speakers raised the issue of transferring research to practice, but differed on how to solve, or even how to formulate, the problem. They also noted the tension between a researcher’s need to publish, which practitioners don’t care about, and an architect’s need to create products for customers, though beyond identifying that tension they had little else to say about it.
The slides for these talks can be found at: http://www.wicsa.net/.
By George Edwards
A new research study to be published in September suggests a novel approach to designing computer systems that facilitate collaboration in large groups. The research leverages computer simulations of large “socio-technical” systems, such as social networks and wiki communities, to investigate how technology helps people collaborate effectively. This information can then be used by the designers of computer-based collaboration tools to encourage the most productive types of interactions among users, based on the type of work being done.
Here’s an example: a large team is assigned a new project to work on. Communication between members of the team can occur in a variety of ways, including through a social network, email, wikis, video chat, and so on. Similarly, work products produced by team members can be distributed and managed in a variety of ways. What mechanisms are best suited to the project? What tools should be used to ensure that team members get all the information they need, without being overloaded by information they don’t care about?
That’s where simulation comes in. By creating a computer simulation of the interactions among team members, the trade-offs among different collaboration options can be investigated. Some options might result in higher coordination overhead, while others might incur a risk of users operating on out-of-date information. The best option depends on the needs of the team and the characteristics of the project.
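A toy sketch of the idea (illustrative only, and not the study’s actual simulation framework) might compare two hypothetical collaboration policies, broadcasting every update versus syncing only periodically, and count coordination messages against reads of out-of-date information.

```python
# Toy simulation of a trade-off between coordination overhead and stale
# information for a hypothetical team. Policies, parameters, and numbers are
# purely illustrative, not from the study.
import random

def simulate(team_size=20, updates=200, sync_every=1, seed=42):
    random.seed(seed)
    last_seen = [0] * team_size   # latest version each member has seen
    version = 0
    messages_sent = 0
    stale_reads = 0
    for _ in range(updates):
        version += 1              # some member publishes a new work product
        if version % sync_every == 0:
            last_seen = [version] * team_size
            messages_sent += team_size
        reader = random.randrange(team_size)
        if last_seen[reader] < version:
            stale_reads += 1      # member acted on out-of-date information
    return {"messages_sent": messages_sent, "stale_reads": stale_reads}

print("broadcast every update:", simulate(sync_every=1))
print("sync every 5th update: ", simulate(sync_every=5))
```

Broadcasting keeps everyone current at the cost of many messages, while periodic syncing cuts the message count but lets some members work from stale information; which is preferable depends on the team and the project, which is the kind of question the simulations in the study are designed to explore.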
The study, authored by computer scientists at the University of Southern California, the University of California – Irvine, and Quandary Peak Research, will be presented at the upcoming International Conference on Cooperative Information Systems (CoopIS 2012) in Rome, Italy. The article is titled Analyzing Design Tradeoffs in Large-scale Socio-Technical Systems through Simulation of Dynamic Collaboration Patterns.
By George Edwards
A research team composed of computer scientists at the University of Southern California, the University of Washington, and Quandary Peak Research has developed a new architecture for software modeling and analysis tools. The proposed architecture, implemented in a prototype tool suite called LIGHT, allows users to customize the appearance and content of graphical models of complex software systems, and then run automated analysis algorithms and generate executable simulations.
Existing state-of-the-art modeling and analysis tools provide either the ability to customize models (through a process termed “metamodeling”) or the ability to perform analysis and simulation out-of-the-box, but not both. LIGHT combines both features in a single platform.
The results of the project, which will be presented at this year’s Joint Working IEEE/IFIP Conference on Software Architecture & 6th European Conference on Software Architecture (WICSA/ECSA 2012), demonstrate how using the architecture can save software engineers and analysts significant tool-building and customization effort. The conference will be held on August 20 – 24 in Helsinki, Finland.