The software development lifecycle continues as long as connected cars are on the street
December 14, 2017
Connectivity changes the notion of the development process ending when a product is launched, or even when its production is ended.
Automotive embedded applications have traditionally been isolated, static, fixed-function, device-specific implementations, and practices and processes have relied on that status. There's a lot for developers in the newly connected automotive sector to learn – domain separation, secure boot, defense-in-depth – and all of it is rocking the ordered worlds of many a development team.
Connectivity changes things dramatically because it makes remote access possible while requiring no physical modification to the car’s systems, most famously demonstrated in Miller and Valasek’s work, “Remote Exploitation of an Unaltered Passenger Vehicle”. But perhaps the biggest upheaval has nothing to do with any of these new technologies per se, and instead is more concerned with the simple notion of “coming to the end of a project”.
Beyond development, connectivity gives system maintenance a new significance with each newly discovered vulnerability, because requirements have the potential to change after development – and not just throughout a production lifetime, but for as long as a product is out in the field.
Safe and secure application code development
The traditional approach to secure software development in enterprise environments is mostly a reactive one – develop the software, and then use penetration, fuzz, and functional testing to expose any weaknesses. Useful though that approach is, in isolation it is not good enough to comply with a functional safety standard such as ISO 26262, which implicitly demands that security factors with a safety implication are considered from the outset, because a safety-critical system cannot be safe if it is not secure.
A V-model illustrates the scale of the problem, by representing the software-related phases defined within ISO 26262. Figure 1 illustrates such a model, with cross references to both the ISO 26262 standard, and to tools likely to be deployed at each phase in the development of today’s highly sophisticated and complex automotive software.
Requirements, UML designs, coding standards, static analysis reports, unit test, code coverage analysis… each phase of this process model implies a host of design, development and test artifacts, in addition to the code itself and the tests that go with it. The principle of bi-directional traceability runs throughout, with each development phase being shown to accurately reflect the one before it and after it.
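That principle of bi-directional traceability can be sketched as a pair of link maps, one per direction, so that any artifact can be traced forward to what implements it and backward to what motivates it. This is a minimal illustrative sketch, not any particular tool's data model; the artifact IDs are invented.

```python
from collections import defaultdict

# Trace links kept in both directions: requirement -> design ->
# code -> test (downstream), and the reverse (upstream).
downstream = defaultdict(set)
upstream = defaultdict(set)

def link(parent, child):
    """Record one trace link, navigable in either direction."""
    downstream[parent].add(child)
    upstream[child].add(parent)

# Hypothetical artifacts for a single requirement
link("REQ-042", "DES-007")      # requirement -> UML design element
link("DES-007", "door_lock.c")  # design element -> source module
link("door_lock.c", "UT-113")   # source module -> unit test

print(downstream["REQ-042"])    # what implements this requirement?
print(upstream["UT-113"])       # what does this test verify?
```

Because every link is stored twice, each phase can be shown to reflect the one before it and the one after it without re-deriving the relationship.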
All of that would represent more than enough stress for the project management team even if everything always ran in a neatly sequential manner, in accordance with the theory of the process model. In that halcyon world, the exact sequence of the standard would be adhered to, the requirements would never change, and tests would never throw up a problem. But life is not like that.
Managing change during development and beyond
Changes to requirements can come from a multitude of sources. What if the client has a change of heart or a new requirement? What if legal obligations change?
Should such changes become necessary, an impact analysis is required on all existing requirements, designs, code, and static and dynamic tests. Impacted code needs to be reanalyzed statically, and all impacted unit and integration tests need to be re-run (regression tested).
Automated bi-directional traceability helps by linking requirements from a host of different sources through to design, code and test, and the artifacts that go with them (Figure 2). The impact of any requirements changes – or, indeed, of failed test cases – can be automatically assessed by means of impact analysis, and addressed accordingly by means of automated testing and analysis. As part of that process, artifacts can be automatically regenerated to present evidence of continued compliance to ISO 26262.
During the development of a traditional, isolated system, that capability is clearly useful enough. But connectivity demands the ability to respond to vulnerabilities, because each newly discovered vulnerability implies a changed or new requirement – one requiring an immediate response, even though the system itself may not have been touched by development engineers for quite some time. In such circumstances, being able to isolate what is affected and automatically retest only those functions becomes something much more significant.
Connectivity changes the notion of the development process ending when a product is launched, or even when its production is ended. Whenever a new vulnerability is discovered in the field, there is a resulting change of requirement to cater for it, coupled with the additional pressure of knowing that in such circumstances, a speedy response to requirements change has the potential to both save lives and enhance reputations. Such an obligation shines a whole new light on automated requirements traceability.