Paul Gerrard is a consultant, teacher, author, webmaster, programmer, tester, conference speaker, rowing coach and publisher. He has conducted consulting assignments in all aspects of software testing and quality assurance, specialising in test assurance.
He has presented keynote talks and tutorials at testing conferences across Europe, the USA, Australia and South Africa, and has occasionally won awards for them. Educated at Oxford University and Imperial College London, he is a Principal of Gerrard Consulting Limited, the host of the UK Test Management Forum and the Programme Chair for the 2014 EuroSTAR testing conference.
In 2010 he won the EuroSTAR Testing Excellence Award and in 2013 he won the inaugural TESTA Lifetime Achievement Award. He’s been programming since the mid-1970s and loves using the Python programming language.
Genislab Technologies: How did you get into Software Testing?
Paul Gerrard: My first job in 1980 was as a graduate civil engineer, but on the first day the boss said, “We have no work for you at the moment, but we’ve had a new computer delivered. Here are the manuals – go figure out what we can do with it.” It was a small office, so I rapidly became the office computer expert and the bug bit. I’ve been working in the software and testing business ever since 1981.
After a couple of job moves in the early 80s, I ended up working for Mercury Communications – the first competitor to British Telecom in the newly deregulated telecom market in the UK.
For about three years, I ran a small team that was embedded in a company HQ. We shared an office with our users and had dedicated on-site customers. Although we worked on ‘green screens’ connected to central computers, we prototyped and ran paper walkthroughs of screen designs, and we released software every few days. We used source code management tools, and even automated build and test tools. We relied on our users to test quite a bit and got a reputation in the IS department for ‘going native’. I would say that we were lean and somewhat agile.
In 1992, I joined a software testing services firm, Systeme Evolutif (SE) in London. The SE business was founded on three core services – consultancy in software testing, testing training and a small amount of outsourcing. Having helped to organise the first EuroSTAR conference in 1993, with Dorothy Graham and SQE from the US, we managed the whole conference in 1994 and 1995. We have contributed to the BCS SIGIST, the British Standard BS 7925, the ISEB Testing Certificate schemes and the early years of the DSDM Consortium, and since 2004 we have hosted the UK Test Management Forum.
In 2007, I bought the SE business and rebranded it Gerrard Consulting, and we remain active in UK and international testing conferences. I also have a non-exec role on the Technology Advisory Board of TestPlant Limited.
Genislab Technologies: Why do you think the Internet of Things is important?
Paul Gerrard: The Internet of Things or the Internet of Everything (I’ll use “IoE” as shorthand) is the most exciting change in our industry since client/server came on the scene in the late 1980s. I think client/server was important because the Internet, Web services, mobile computing are all essentially client/server implementations. Some would argue that Object-Oriented analysis, design and development are significant, but I think these affect only programmers and designers. Agile is significant – but I think it is transitional. Continuous delivery and DevOps are bringing factory automation processes into software and are perhaps the most appropriate approach for IoE implementations. After Waterfall and Agile, Continuous Delivery and DevOps are becoming ‘the third way’.
The Internet of Everything, if it develops in the way that some forecasters predict, will affect everyone on the planet. Estimates of the number of connected devices on the IoE range from 50 billion devices to 700 billion devices over the next 5 to 20 years. No one knows of course, but the expectation is that we are on a journey that will increase the scale of the internet by one hundred times. This will take some testing! But how on earth will we approach this challenge?
Genislab Technologies: What will we need to test for in the IoE?
Paul Gerrard: Last year, I was commissioned to write an article series on Testing the Internet of Everything. The first two articles (as well as a lot of other papers and articles) can be found here. These two papers set the scene and the scale of the challenge, but also introduce a seven-layer architecture that might help people to understand the features that must exist to make the IoE happen. The second paper also sets out at a high level some of the risks that we must address.
I am writing the third instalment – on IoE Test Strategy – right now. I have to tell you that figuring out how we test what is, you might say, every system component that exists now and in the future is proving quite difficult! I won’t be solving all of the testing challenges in the next few weeks. But what I can perhaps do is suggest some of the dimensions, some of the influences and some of the opportunities of the IoE testing problem. I will try to set the scene for what we have to think about here.
Scale: The obvious first challenge is scale, but that is primarily a logistical problem for implementers. Of course, there will be scalability challenges at various levels of the architecture, and we’ll have to do some large-scale load, performance and stress testing.
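To make that concrete, here is a crude sketch of what large-scale load testing at an ingestion layer might look like. Everything here is a hypothetical stand-in: the `ingest` function simulates the system under test (in practice it would be an HTTP or MQTT call), and the device and message counts are arbitrary.

```python
import random
import time
from concurrent.futures import ThreadPoolExecutor

def ingest(reading):
    """Hypothetical ingestion endpoint; a real test would call the
    system under test over HTTP/MQTT instead of sleeping."""
    time.sleep(random.uniform(0.001, 0.005))  # simulated service latency
    return {"status": "accepted", "device": reading["device_id"]}

def virtual_device(device_id, n_messages=20):
    """One simulated sensor sending a burst of readings; returns latencies."""
    latencies = []
    for seq in range(n_messages):
        start = time.perf_counter()
        ingest({"device_id": device_id, "seq": seq, "value": random.random()})
        latencies.append(time.perf_counter() - start)
    return latencies

def run_load_test(n_devices=50):
    """Run all virtual devices concurrently; report count and p95 latency."""
    with ThreadPoolExecutor(max_workers=n_devices) as pool:
        results = pool.map(virtual_device, range(n_devices))
    all_latencies = sorted(lat for device in results for lat in device)
    p95 = all_latencies[int(len(all_latencies) * 0.95)]
    return len(all_latencies), p95

if __name__ == "__main__":
    total, p95 = run_load_test()
    print(f"{total} messages sent, 95th percentile latency {p95 * 1000:.1f} ms")
```

Real IoE load testing would add realistic device behaviour, network conditions and far larger populations, but the shape – many concurrent virtual devices, percentile latency reporting – stays the same.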
Hardware-Level functionality: The lowest level devices are sophisticated, but essentially perform simple functions like sensing the value of something or changing the setting, position or speed of something. These devices are packaged into objects that will need testing in isolation and most of this will be performed by manufacturers.
Object and Server level functionality: The vast majority of functionality that needs testing will reside on local hubs and aggregators and data-centre-based server infrastructure. Internet-based and native mobile apps will deliver the data, visualisations and control over other aspects of the architecture. Architectures will range from simple web-apps to systems with ten, twenty or more complex sub-systems.
Mobile objects: Testing static objects is one thing, but testing objects that move is another. Mobile objects move in and out of the range of networks, they roam across networks, and the environmental conditions at different locations vary and may affect the functionality of the object itself. Our sources of data and the data itself will be affected by the location and movement of devices. Mobile devices will drift into and out of our network range, but also into and out of other, not necessarily friendly, networks. Power, interference, network strength, roaming and jamming issues will all have an effect.
Moving networks: Some objects move and carry with them their own local network. A network that moves will encounter other networks that interfere or may introduce a rogue or insecure network into your vicinity and pose a security problem. Cars, buses, trains, aeroplanes, ships, shopping trolleys, trash trucks, hot-dog stalls, tractors – almost anything that moves – might carry with them their own networks or join foreign networks as they encounter them.
Network security risks at multiple levels: Rogue devices that enter your network coverage area might eavesdrop or inject fake data. Rogue access points might hijack your users’ connections and data. Vulnerable points at all levels in your architecture are prone to attack. There are security risks at all levels of your network architecture that will need to be addressed.
Device registration, provisioning, failure and security: Devices may be fixed in location but the initial registration and provisioning (configuration) are likely to be automatic. More complicated scenarios arise where devices move in and out of range of a network or transition between networks. Needless to say, low-power devices fixed in perhaps remote locations are prone to power failures, snow, heat, cold, vandals, animals, thieves and so on. Power-down, power-up and automated authentication, configuration and registration processes will need to be tested.
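One way to get a handle on those power-down/power-up, registration and re-authentication scenarios is to model the device lifecycle as an explicit state machine and generate test sequences against it. A minimal sketch follows; the states and events are illustrative assumptions, not any standard's terminology.

```python
# Device lifecycle modelled as (state, event) -> next-state transitions.
# Useful for enumerating which event sequences need test coverage.
TRANSITIONS = {
    ("unregistered",   "register"):   "registered",
    ("registered",     "provision"):  "provisioned",
    ("provisioned",    "power_down"): "offline",
    ("offline",        "power_up"):   "authenticating",
    ("authenticating", "auth_ok"):    "provisioned",
    ("authenticating", "auth_fail"):  "unregistered",  # must re-register
}

def step(state, event):
    """Apply one lifecycle event; raise on an illegal transition."""
    try:
        return TRANSITIONS[(state, event)]
    except KeyError:
        raise ValueError(f"illegal event {event!r} in state {state!r}")

def run(events, state="unregistered"):
    """Replay a sequence of events and return the final state."""
    for event in events:
        state = step(state, event)
    return state
```

For example, `run(["register", "provision", "power_down", "power_up", "auth_ok"])` ends back in `provisioned`, while an `auth_fail` after power-up drops the device to `unregistered`. Walking every path through such a model is one systematic way to derive the registration and recovery tests the paragraph describes.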
Collaboration confusion: Mobile, moving devices will collaborate with fixed devices and each other in more and more complex ways and in large numbers. For example, in a so-called Smart City, cars will collaborate as a crowd to decide optimum routes so every car gets to its destination efficiently. But, however these resources are controlled, car park spaces will become available and unavailable randomly, so the optimisation algorithm must cope with rapidly changing situations. At the same time, these services must not confuse public services, commercial vehicle drivers, private car drivers and passengers. Managing the expectations of users will be a particular challenge.
Integration at all levels: Integration of physical devices or software components will exist at every level. Integration of data will encompass flows of data that must be correctly filtered, validated, queued, transmitted and accepted. Many IoE devices will be sensors chirping a few bytes of data periodically, but many will also be software components, servers, actuators, switches, monitors, trip switches, heaters, lifts, cars, planes and factory machinery. The consistency, timeliness, reliability and, of course, safety of control functions will be a major consideration. Industry standards in application domains that are safety-related are likely to have a role. However, for now, much of the legislation and standardisation that will be required does not yet exist.
Big Data – logistics: Needless to say, much of the data collected by devices will end up in a database somewhere. Some data will be transactional, but most of it will be collected by sensors in remote locations or connected to moving objects. A medium-sized factory might collect as much as a terabyte (1,000,000,000,000 bytes) per day. Performance and reliability requirements might mean this data must be duplicated several times over. Wherever it is stored (and for however long), a very substantial data storage service will be part of the system to be tested.
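The scale of that storage problem is easy to underestimate, so a back-of-envelope calculation helps. The one-terabyte-per-day figure is from the text; the threefold replication and one-year retention below are illustrative assumptions.

```python
# Back-of-envelope storage sizing for the factory example in the text.
TB = 10**12  # bytes, matching the figure quoted above

daily_raw = 1 * TB       # one medium-sized factory, per the text
replication = 3          # assumed number of copies for durability/performance
retention_days = 365     # assumed retention period

total_bytes = daily_raw * replication * retention_days
print(f"Storage needed: {total_bytes / 10**15:.1f} PB per factory per year")
# -> roughly 1.1 petabytes, for a single factory
```

Multiply that by thousands of factories, fleets and cities and the test environment itself becomes a substantial engineering problem: you cannot simply clone production data to a test rig at this scale.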
Big-Data – Analysis and visualisation: Analyses of data will not be limited simply to tabulated reports. The disciplines of data science and visualisation are advancing rapidly, but these rely on timely, accurate and consistent data; they rely on data acquisition, filtering, merging, integration and reconciliations of data from many sources, many of which will never be under your control. Often, data will be sparse or collected infrequently or at random times. Data will need to be statistically significant, smoothed, extrapolated and analysed with confidence.
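As one small illustration of the smoothing problem: readings that arrive at irregular, random times cannot be averaged with a fixed-length moving window over sample counts; a sliding time window is one simple alternative. The 60-second window and the data below are assumptions for the sketch.

```python
def window_smooth(samples, window=60.0):
    """Smooth irregularly-timed readings with a sliding time-window mean.

    samples: list of (timestamp_seconds, value) pairs, sorted by timestamp.
    Returns one smoothed value per sample: the mean of all readings whose
    timestamps fall in the preceding `window` seconds (sample inclusive).
    """
    smoothed = []
    for t, _ in samples:
        in_window = [v for (ts, v) in samples if t - window <= ts <= t]
        smoothed.append(sum(in_window) / len(in_window))
    return smoothed

# Sparse readings at 0 s, 30 s and 100 s: the first two fall in one
# 60-second window, the third stands alone.
print(window_smooth([(0, 10.0), (30, 20.0), (100, 30.0)], window=60))
```

Production pipelines would use statistical techniques far beyond this (interpolation, Kalman filtering, significance testing), but even this toy shows why test data for analytics must include realistic gaps and irregular timing, not just neat evenly-spaced series.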
Personal and corporate privacy: The privacy of data – what is captured, what is transmitted, what is shared with trusted or unknown third parties or stolen by crooks and misused – is probably people’s most pressing concern (and this is also a barrier to exploitation of the IoE). The current legal framework (e.g. the Data Protection and privacy laws in the UK) may not be sufficient to protect our personal or corporate privacy. Hackers and crooks are one threat, but a central government listening to all our data and creating personalised data envelopes to track crooks and terrorists will probably end up tracking all citizens, not just suspects. Your government may be seen to be a villain in this unfolding story.
Wearables and Embedded: Right now, there is a heavy focus on wearable devices such as training and activity trackers and heart monitors that connect with apps on mobile phones; their data can be integrated with Google Maps to show training routes, for example. Other devices such as smart watches, clothing and virtual reality headsets are emerging. But there are increasing numbers of applications where the device is not worn but embedded in your body. Healing chips, cyber pills, implanted birth control, smart dust and the ‘verified self’ (used to ID every human on the planet) are all being field tested. It will be hard not to call human beings ‘things’ on the internet before too long. Will we need to hire thousands of testers with devices embedded in their body? Surely not. But testing won’t be as simple as firing off thousands of identical messages to servers using the functional and performance test tools we have today. Something much more sophisticated will be required.
Genislab Technologies: How will IoE change the way we test?
Paul Gerrard: Let me suggest that there have been some quantum leaps in the complexity of our systems and the IoE is the next step on our journey. Let me recount the history of our software testing journey. I know testing started before we had ‘green screens’, but it is a convenient starting point.
- Screen based applications running on dumb terminals were the norm up to the mid-1980s or so (and of course are still present in many companies). Each screen typically had a single entry point and a single exit point, the content of the screen and data input was limited to text. Life was relatively simple – but systems had many screens.
- With the advent of GUI applications, it became possible to have many windows active at the same time. There were many ways in and ways out. The content of screens could now include graphics, sounds and video. Windows applications had to deal with events – triggered by the user clicking anywhere on screen or from other windows or applications resident in the GUI environment.
- At the same time as GUIs arrived, client/server became the architecture of choice for most applications. The scope of an application was no longer limited to workstations but might involve many connected servers collaborating via (often flaky) middleware. Performance and reliability became more prominent risks.
- The emergence of the internet, although hailed as a revolution, is really an instance of client/server. What was different, though, was the exposure of private networks, functionality and data to the public internet. Security and privacy became much bigger concerns.
- We are in the thick of the ‘Mobile Revolution’ right now. The democratisation of software means applications are available anytime and anywhere on varied devices and configurations in numbers far beyond our ability to test comprehensively. The proliferation of apps and devices and our enthusiasm for this expansion knows no bounds.
- And now, on top of all of the technologies above, we expect that the network of connected ‘things’ will increase the scale of the internet by perhaps 100 times. These devices range in sophistication from 50 cent components to buildings, ships, cars and aeroplanes. And that’s the point – IoE brings all of the challenges I mentioned earlier ‘plus all of the above’ with the full range of sophistication.
The IoE potentially brings a new level of complexity, and it certainly brings new levels of scale. The non-functional risks are reasonably well known. What is new is the need to do functional testing and simulation at scale.
Read the second part of the interview here.